Irrespective of the ethics and the apocalyptic predictions, artificial intelligence (AI) has already become a central component of economic and institutional decision-making. Research in the International Journal of Intelligent Systems Design and Computing has gone beyond an industry-specific analysis of the state-of-the-AI-art and offers a detailed framework of how the many different AI tools are being adopted.
The main point that arises from the analysis is that while AI technologies are being used widely across sectors, organizations do not yet have a strategy that allows AI to be integrated in a way that balances innovation with accountability.
AI encompasses so-called machine learning for recognising patterns in data, natural language processing that can interpret and generate human language, and generative tools that produce text, images, video, computer code, and other output. These tools are changing many sectors, from healthcare diagnostics and the processing of industrial and financial data to the production of hit pop songs and accompanying videos.
Education and business operations are undergoing similar shifts. Adaptive learning platforms in education adjust course material to suit the way individual students learn. In retail and logistics, AI is being used to refine supply chains, manage inventory, and personalize the customer “experience”. Even in the legal world, law enforcement agencies are using AI to assess crime scenes and weigh evidence, while judges are using these tools to summarise massive briefs when preparing their concluding remarks.
One of the most pressing issues highlighted by the research is data privacy, as AI systems depend on large volumes of often sensitive and personal information. In addition, there is the issue of algorithmic transparency, wherein we are losing the ability to understand how a given AI system arrives at a specific decision. Indeed, many of the most advanced AI models now work essentially as black boxes, meaning their internal processes simply cannot be interpreted…perhaps without resorting to another AI to do the interpretation! Such a lack of transparency might undermine trust in high-stakes contexts such as medical diagnoses or judicial decisions.
To address these issues, the research proposes a framework based on stakeholder theory, which emphasises the importance of all parties affected by the decisions AI might make. In the business context, this means organisations should not focus solely on efficiency or profit; they must adopt a perspective that allows them to weigh the interests of employees, customers, regulators, and society at large when adopting AI. This might only come about, of course, with governance, regulations, and ethical obligations.
Idemudia, E.C. (2025) ‘Artificial intelligence’s effect and influence on multiple disciplines and sectors’, Int. J. Intelligent Systems Design and Computing, Vol. 3, Nos. 3/4, pp.254–274.