Synergy Technology Management
Artificial Intelligence (AI) has suddenly appeared from ‘erewhon’. What is it? Where did it come from? Why is it important? This two-part blog post provides context and answers to these questions.
Artificial Intelligence has skyrocketed into public attention with stories of promise and peril for humanity. Political leadership is declaring AI a strategic national asset. Governments around the world are pouring billions of dollars into AI research and related industry development. At the same time, companies are investing billions in applied AI research, product development and services enhanced through the use of AI (e.g., voice recognition services).
Artificial Intelligence is the branch of computer science concerned with the automation of intelligent behavior. In other words, it is the automation of activities that we associate with human thinking and action.
The traditional domains of AI include reasoning and decision making, representing knowledge, planning, and learning. AI also includes natural language processing (e.g., speech recognition, translation), sensory perception (e.g., facial recognition) and the ability to move and manipulate objects (e.g., autonomous mining vehicles).
Where did AI come from?
AI is not new: it has been around in academic circles since the mid-1950s, with origins that go back centuries. Since its modern inception, AI has vacillated between periods of research progress, with business applications and uptake, followed by apparent failure when over-promising in the research community led to a general collapse of the commercial exploitation of AI. This pattern created a cycle whimsically referred to as AI springs and AI winters. That is, until the early 2000s, when we entered the third AI spring. Consequently, competition is heating up between nations and companies.
Origins: Literature - Philosophy - Mathematics - Engineering
Like so many things that seem new when they first appear, there is often a long history behind their origins. AI is no exception. Here are a few notable examples. In 1726 Jonathan Swift, an Anglo-Irish writer, published Gulliver’s Travels, a tale of travel adventure in which he describes an engine “for improving speculative knowledge by practical and mechanical operations”. This was one of the first written uses of the word ‘engine’ in the context of some kind of physical device; previously it had an abstract meaning, as in ‘ingenuity, artfulness; cunning, trickery’.
By 1763, an English mathematician and clergyman named Thomas Bayes had developed a framework for reasoning about the probability of events, which today is a leading approach in machine learning, an important sub-field of AI. Then in 1854, George Boole, an English mathematician, developed the idea that logical reasoning could be performed systematically, in the same manner as solving a system of equations. By 1898, Nikola Tesla, a Serbian-American inventor, had demonstrated the first radio-controlled boat, with what he called a “borrowed brain” as its control. This was one of the first engineered tests of controlling a mobile machine remotely.
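Bayes’ framework can be illustrated with a small worked example. The sketch below applies Bayes’ rule to a hypothetical diagnostic test (the numbers are invented for illustration): even a fairly accurate test yields a modest posterior probability when the condition is rare, which is exactly the kind of probabilistic reasoning modern machine learning builds on.

```python
# Illustration of Bayes' rule with made-up numbers: a test that is
# 99% sensitive and 95% specific for a condition affecting 1% of
# a population.

def bayes_posterior(prior, sensitivity, specificity):
    """P(condition | positive test), computed via Bayes' rule."""
    p_pos_given_cond = sensitivity            # P(positive | condition)
    p_pos_given_no_cond = 1.0 - specificity   # P(positive | no condition)
    # Total probability of a positive test across both cases
    p_pos = p_pos_given_cond * prior + p_pos_given_no_cond * (1.0 - prior)
    return (p_pos_given_cond * prior) / p_pos

posterior = bayes_posterior(prior=0.01, sensitivity=0.99, specificity=0.95)
print(f"P(condition | positive) = {posterior:.3f}")  # → 0.167
```

Despite the test’s accuracy, only about 17% of positive results indicate the condition, because false positives among the healthy majority outnumber true positives from the rare condition.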
Returning to literature: in 1921 the Czech writer Karel Čapek coined the word ‘robot’ in his play R.U.R. – Rossum’s Universal Robots. This dystopian play set the stage for the rise of the Terminator and Matrix worlds of modern sci-fi.
On a more practical level, Houdina Radio Control, an American firm, demonstrated a radio-controlled driverless car on the streets of New York City in 1925. One can see the radio antenna mounted on the 1926 Chandler in the image below. It has a haunting similarity to the antenna on a Google driverless car today.
The radio-operated 1926 Chandler automobile, called American Wonder. Source: https://en.wikipedia.org/wiki/Houdina_Radio_Control
None of these developments is AI-specific; rather, they serve as precursors to an evolving future awaiting the arrival of new science and technology.
The computer age was just beginning. By 1943, the Americans Warren McCulloch and Walter Pitts had created a computational model for neural networks, based on mathematics and algorithms, called threshold logic. This model paved the way for neural network research to split into two approaches: one focused on biological processes in the brain, while the other studied the application of neural networks to artificial intelligence.
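The core idea of threshold logic can be sketched in a few lines: a McCulloch-Pitts style unit “fires” (outputs 1) when the weighted sum of its binary inputs reaches a threshold. The weights and thresholds below are illustrative choices, showing how a single unit can realise simple logic gates.

```python
# A minimal McCulloch-Pitts style threshold unit: it outputs 1 when
# the weighted sum of its binary inputs reaches the threshold.

def threshold_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights [1, 1]: threshold 2 realises AND, threshold 1 realises OR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b,
              threshold_neuron([a, b], [1, 1], 2),   # AND
              threshold_neuron([a, b], [1, 1], 1))   # OR
```

Chaining many such units together is what later neural network research generalised, eventually with learned rather than hand-set weights.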
Then in 1950, Claude Shannon, an early American computer scientist, published the first paper on developing a chess-playing computer program. Also in 1950, Alan Turing, a British mathematician and philosopher, published a paper called Computing Machinery and Intelligence, in which he proposed what became known as the Turing Test. This was the first formalised approach to testing a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
From the 1950s through the 1960s, AI was being codified for the first time in computer programs, such as the American Arthur Samuel’s program that learned on its own how to play checkers. AI researchers became so enthusiastic that by 1965, AI pioneer and Nobel Laureate Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work a man can do”. In the spirit of such promise, the late-1960s film 2001: A Space Odyssey was released, featuring HAL, a sentient computer with nefarious behaviour towards humans – just as in R.U.R., the intelligent machine turns on its creator. Neither Simon’s promise nor the reality of HAL has yet come about: not by 1985, not by 2001, and not even today.
Erewhon is a play on the word ‘nowhere’ and the title of an 1872 novel by Samuel Butler, who was the first to address the possibility that machines might develop consciousness through Darwinian selection. Source: "Darwin among the Machines", reprinted in the Notebooks of Samuel Butler at Project Gutenberg.