The term artificial intelligence (AI) is becoming ubiquitous, yet many of the concepts underlying these disparate technologies have been around for decades. Techniques such as neural networks, which loosely imitate how the brain processes information, or genetic algorithms, which borrow from evolution to find better solutions, have been researched and discussed for many years. Even machine learning, something of a buzzword, still rests on the basic principle of using a computer to find patterns in data and then to recognise similar patterns in new data.
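By way of a loose illustration of that principle, rather than anything taken from the article itself, the short sketch below fits a simple classifier to a handful of labelled points and then applies the learned pattern to new data; the data, labels, and choice of library are assumptions made purely for the example.

```python
# Illustrative only: "learn" a pattern from labelled examples, then
# apply it to unseen data. The data and labels are made up.
from sklearn.neighbors import KNeighborsClassifier

# Training data: two measurements per example, each with a known label.
X_train = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]
y_train = ["group A", "group A", "group B", "group B"]

# Fit a simple nearest-neighbour model to the known examples.
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

# New, unseen data: the model assigns each point to the closest pattern it has seen.
X_new = [[1.2, 0.8], [4.8, 5.0]]
print(model.predict(X_new))  # expected: ['group A' 'group B']
```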
The big change is not in the theoretical principles but in the scale at which they are applied. Larger datasets, faster hardware, and more sophisticated engineering have made existing methods far more capable than ever before.
This growth in scale has created new opportunities, but it has not yet overcome some long-standing weaknesses. Most systems remain very good at narrow tasks, such as spotting faces in photos or translating and summarising text, but cannot necessarily cope with bigger problems that require broad reasoning, intuition, or a detailed understanding of context. Moreover, AI tools can make mistakes and even “hallucinate”, offering answers and responses that do not mesh with reality or the facts, and, more worryingly, sometimes reflect biases inherent in their training data. There is a popular belief that AI is fast approaching human-like intelligence, but we have to assume that such sophistication is still some way off.
Writing in the International Journal of Information and Operations Management Education, a UK team has looked at AI in the context of technological history, and it seems to follow a path similar to that taken by transport, communication, and other areas. Development and uptake generally follow an S-shaped curve: progress starts off flat and slow, then there is a sudden, rapid burst, followed by a levelling out onto a plateau as key design principles settle.
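To make that shape concrete, a logistic function is one standard way to describe such a curve. The paper is summarised here in prose only, so the sketch below, with its arbitrary parameter values, is simply an illustration of the flat-then-rapid-then-plateau pattern, not a model from the study.

```python
# Illustrative only: a logistic (S-shaped) curve of the kind often used
# to describe technology adoption. Parameter values are arbitrary.
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=5.0):
    """Adoption level at time t: flat start, rapid middle, plateau near the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 11):
    print(t, round(logistic(t), 1))
# Output climbs slowly at first, surges around t = 5, then flattens near 100.
```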
At the moment, commercial priorities are pushing companies in almost every sector to adopt AI tools in order to chase profits and efficiency rather than long-term social benefit. Public enthusiasm is split between those who see it as a positive evolution in computing and those who see it as demeaning and degrading human creativity and activity. Many people in both camps have unrealistic expectations, assuming the best or the worst, whereas the truth is logically fuzzy, one might say.
Increasingly, the best results come from hybrid approaches, where we can use AI to support expert judgement in medical diagnostics and engineering, for instance, rather than allowing it to generate answers to problems without the requisite checks and balances. The value of AI will reveal itself in how well it serves people. Progress will come through practical advances in tools that help humans make better decisions, rather than machines that claim to think for us.
AI is far more than a generator of text, images, or music. In fact, generative AI, by some definitions, is not “intelligence” at all, but mimicry based on statistics. True AI, as the technology currently stands, lies in pattern recognition, prediction, optimisation, and problem-solving. It is the technology that detects disease in medical scans, forecasts supply and demand, fine-tunes transport logistics, and supports critical decision-making across countless domains. Its true promise will be realised not in imitation of human creativity, but in offering up clues and insights that allow people to think, act, and innovate more effectively.
Rugg, G. and Skillen, J.D. (2025) ‘The limits to growth for AI’, Int. J. Information and Operations Management Education, Vol. 8, No. 1, pp.61–73.