Geopolitics & Artificial Intelligence: A New Reality

Posted on Tuesday, July 20, 2021

Author: Peter MacKinnon

Managing Director, Synergy Technology Management
Chair, Foresight Synergy Network hosted by Telfer School of Management, uOttawa
 

Geopolitics has traditionally been framed as a contest among diverse elites with vested interests competing for control of geography and natural resources. This has been the case for thousands of years, from the earliest empires to the nation-states and city-states of today. However, geopolitics is now taking on a new dimension, one defined by an emerging competition among states and elements of their corporate sectors to control and use massive amounts of data about people, places and things. The control implied here comes from the ability to amass, mine, and use such data for a wide range of strategic and tactical needs, predominantly those of the state and large enterprise.

Data has become the newest global commodity.  

To control data in these ways is a consequence of an emergent ability to apply the techniques and tools of what has come to be called Artificial Intelligence, AI for short.[1] AI is not new in concept or understanding; it has been an academic topic for more than 60 years. What is new is the manifestation of AI as a rapidly developing technology that can find patterns in data of the kind normally associated with human intuition and skill. Take games, for example, a perpetual human pastime. Today’s best players of strategy games like chess and Go are no longer human masters; they are AI-based systems.

In recent years, worldwide news coverage of AI has skyrocketed with stories of promise and peril for humanity. Governments around the world are pouring billions of dollars into AI research and related industry development, while companies are investing billions in applied AI research, product development and enhanced AI-related services. Political leaders are declaring AI a strategic national asset. Competition is heating up between companies and nations.

AI is clearly on the rise. It has become embedded in many applications and services used by billions of people, in areas such as internet search, customer service inquiries, mining of personal online information, entertainment recommendations, and other services often based on one’s personal digital information. For example, automated personal assistants (e.g., Cortana and Siri from America, iFlytek from China, and Alisa from Russia) now serve over a billion users.

Human history can be characterised as evolving through a hierarchy of control: first land, then machines, and now the collection and manipulation of data. That journey is metaphorically marked by a transition from the manual labour of man and beast, through machine-assisted labour-saving devices, to the more recent emergence of cognitive tools: tools for societal good on the one hand, and tools to deceive and alter facts, opinions and perspectives on the other. Consequently, vast arrays of machines are today being empowered with AI to extend their capabilities in providing societal benefits, while other machines are becoming weapons for criminal, espionage and military uses.

Real-world applications of AI have accelerated during the last decade due to three concurrent developments: better software algorithms, a massive increase in computing power, and the ability to capture, store, and manipulate vast amounts of data (e.g., big data and data analytics integrated with AI).

AI is disruptive in many ways. In the past, automation involved industrial robots and computer systems designed to do predictable, routine and codifiable tasks. Today, artificial intelligence is poised to take on a greater share of tasks that are currently the domain of humans: tasks requiring problem solving, decision making and interaction within a less than fully predictable environment. Automation of this sort includes self-driving vehicles and disease diagnosis. Consequently, there are reasons for concern, both technical and socio-economic. For example, AI capabilities have profound implications for the future of work, for semi-skilled and highly-skilled workers alike. This is emerging as a serious global public policy issue, as exemplified by numerous basic income initiatives worldwide. Tax the robots? Maybe.

Furthermore, this transition to intelligent machines is occurring during an era in which many workers are already struggling to maintain a full-time job, a struggle further exacerbated by the global pandemic. In addition, ‘automation anxiety’ is made more acute by labour markets in many countries that have tilted against workers over the last 30 years, with increasing income inequality and nearly stagnant real wages.

We have learned from history how various technologies sparked the industrial revolution: water mills, the cotton gin, steam engines, and so on. These machines dramatically changed our polities, economies, employment patterns, demography, and even our cultures. The new machines of the industrial revolution were powerful engines of change in their own right. However, what really turned everything upside down were the social ways in which humans re-organized themselves to creatively leverage the opportunities embedded in these machines. AI is likely to have an impact similar to, if not more profound than, that of the seeds sown by the industrial revolution.

Today, AI is high on the national agenda of many countries, positioned as a tool in shaping national interests in both overt and covert ways. As a consequence, AI has become a geopolitical pawn in a new power game among states. For example, China has set a goal to be the world’s leader in AI by 2030 and the Russian President has stated that the country that dominates AI will control the world. The stakes are high and the outcomes are yet uncertain. 

At the same time, the ‘AI truth meter’ for a considerable swath of the global public is riddled with misunderstanding, ranging from anticipated mass job losses without replacement, to perceptions of digital surveillance and the erosion of privacy and trust, to fears of superintelligence and machines taking over.

AI is not moving in a single ethical direction; like many technologies, it is being pursued by diverse groups of players of all descriptions, including non-state actors such as criminal gangs and terrorist groups, each anticipating positive outcomes in using AI to achieve their perceived goals.

AI in its many forms also has enormous potential to disrupt the current model of state governance and to add to geopolitical competition, through hard-to-discriminate fake news, ever-deepening echo chambers of social segmentation on social media, and election meddling through digital malfeasance. Thus, there are significant risks that must be managed through a combination of technical design and policy-making instruments in order to maximize AI’s benefits for any given society while protecting its ethical and social values (e.g., establishing liability with respect to the creation and use of AI applications).

As a precautionary note, the promise of AI may yet fizzle due to technical barriers, or hit a wall of social resistance to rapid change and fragmentation, as nations, companies and individuals grapple with this emerging and profound change in the order of things.

The author can be reached at mackinnon.peter@gmail.com.

 
