Synergy Technology Management
Artificial Intelligence has skyrocketed into public attention with stories of promise and peril for humanity. This is the second of a two-part blog post discussing AI’s origins and its evolution to the present day.
AI’s Seasonal Cycle
In 1956 a US government-funded workshop was held at Dartmouth College to establish the formal domain of AI. Over the next several years this event was followed by some modest achievements in developing tools capable of human-like performance in narrow fields (e.g., geometrical proofs, algebra, and simple games).
By the early 1980s commercial AI applications were beginning to appear, primarily in the form of Expert Systems, the first pragmatic application of AI. Most AI companies at the time were start-ups, concentrated primarily in North America, while applied AI research groups were being established within numerous large existing companies in fields such as complex engine maintenance, aerospace design, and oil and gas exploration.
Even by this time, it was clear that standard computers were not sufficiently optimised to run the languages designed for executing AI-based rule sets. Consequently, several start-ups emerged in the US (e.g., Symbolics, LISP Machines) offering computer systems optimised for the AI languages of the day, notably LISP and PROLOG.
By the early 1980s a trade war was shaping up between the United States and Japan, not unlike the current fracas between America and China. As a component of that ‘war’, in 1982 the Japanese Government announced the Fifth Generation Computer Project, with the aim of developing computers that could carry on conversations, translate languages, interpret pictures, and reason like people.
In the midst of this first AI spring, named of course in retrospect, the US, Canada and many other countries reacted with concern over what this Japanese computing project really meant, especially given the rising influence of Japanese technological prowess. Various nations undertook crash studies of their domestic capabilities and foreign threats vis-à-vis next-generation computing and artificial intelligence.
The Fifth Generation Project seemed set to radically accelerate AI advances in hardware, software tools and applications. However, by the late 1980s the project was falling well behind its goals. At the same time, the AI capabilities ‘promised’ by the mid-1980s also failed to materialise. The net result was a dramatic reduction in funding for basic and applied AI research in many countries. As a result, AI slipped into its first winter, which lasted until the turn of the century.
The Recent Past & Into the Future
By the early 2000s major AI research breakthroughs in machine learning, natural language processing and machine vision were finding their way into commercial applications. This resurgence has been driven by the availability of vastly more powerful computing resources (e.g., cloud computing), massive amounts of digital data (e.g., social media) and major advances in ‘deep learning’, a sub-field of AI (e.g., systems that defeated the world’s top chess and Go champions). None of these capabilities existed at any significant scale prior to this time.
Artificial Intelligence has evolved over the past 60 years in fits and starts. It is now an emergent disruptive technology with profound applications and implications for many facets of human activity and society, though its overall net benefits remain unclear. The limits of AI’s applicability are wide open.
We have learned from history how various technologies sparked the industrial revolution. These ‘artificial’ machines dramatically changed our polities, societies, economies, individual employment patterns, demography and even our cultures.
Artificial Intelligence is likely to have a similar, if not more profound, impact on society and how we are governed. The physical technologies (i.e., machines) of the industrial revolution were powerful engines of change in their own right, but most were single-purpose and task-specific. Nonetheless, they turned everything upside down through the ensuing social ways in which humans re-organised themselves to creatively leverage the opportunities embedded in these machines; for example, the rise of the corporation and assembly-line manufacturing. This transformative yet unintended consequence induced by the machine may happen again with AI embedded in society’s products and services.
However, the promise of AI may yet fizzle due to technical barriers, or it may hit a wall of social resistance to rapid change and social fragmentation, including in governance, as nations, companies and individuals try to grapple with a profound change in the order of things.
Meanwhile, consideration of the ethics, efficacy, and philosophical implications of AI for society and the individual lags behind the technological developments and associated applications of AI.
An Expert System consisted of a database of specific factual information and a set of rules for inferring relations within and between the codified data in the system (e.g., the data could be pathology samples, and the inference rules the ways and means by which a pathologist diagnoses the case at hand, i.e., applied expert knowledge).
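The facts-plus-rules structure described above can be sketched as a toy forward-chaining rule engine. This is an illustrative sketch only, not a reconstruction of any historical system, and all the facts and rule names are hypothetical examples:

```python
# Toy forward-chaining rule engine illustrating the Expert System
# structure described above: a fact base plus if-then inference rules.
# All facts and rules here are hypothetical examples.

facts = {"fever", "elevated_white_cells"}

# Each rule is a pair: (set of required facts, fact to conclude).
rules = [
    ({"fever", "elevated_white_cells"}, "possible_infection"),
    ({"possible_infection", "cough"}, "possible_respiratory_infection"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

inferred = forward_chain(facts, rules)
print(sorted(inferred))
```

Real Expert System shells of the era (typically written in LISP or PROLOG) added features this sketch omits, such as backward chaining from a goal, certainty factors, and explanations of how a conclusion was reached.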