The TechCrunch Global Affairs Project examines the increasingly intertwined relationship between the tech sector and global politics.
Geopolitical actors have always used technology to further their goals. Unlike other technologies, artificial intelligence (AI) is far more than a mere tool. We do not want to anthropomorphize AI or suggest that it has intentions of its own. It is not — yet — a moral agent. But it is fast becoming a primary determinant of our collective destiny. We believe that because of AI’s unique characteristics — and its impact on other fields, from biotechnologies to nanotechnologies — it is already threatening the foundations of global peace and security.
The rapid pace of AI development, paired with the breadth of new applications (the global AI market is expected to grow more than ninefold from 2020 to 2028), means that AI systems are being widely deployed without sufficient legal oversight or full consideration of their ethical impacts. This gap, often referred to as the pacing problem, has left legislatures and executive branches simply unable to cope.
After all, the impacts of new technologies are often hard to foresee. Smartphones and social media were embedded in daily life long before we fully appreciated their potential for misuse. Likewise, it took time to grasp the implications of facial recognition technology for privacy and human rights.
Some countries will deploy AI to manipulate public opinion by determining what information people see and by using surveillance to curtail freedom of expression.
Looking further ahead, we have little idea which challenges currently being researched will lead to innovations and how those innovations will interact with each other and the wider environment.
These problems are especially acute with AI because the means by which learning algorithms arrive at their conclusions are often inscrutable. When undesirable effects come to light, it can be difficult or impossible to determine why they occurred. And systems that continually learn and change their behavior cannot be exhaustively tested and certified as safe.
AI systems can act with little or no human intervention. One need not read a science fiction novel to imagine dangerous scenarios. Autonomous systems risk undermining the principle that there should always be an agent, human or corporate, who can be held responsible for actions in the world, especially when it comes to questions of war and peace. We cannot hold the systems themselves to account, and those who deploy them will argue that they are not responsible when the systems act in unpredictable ways.
In short, we believe that our societies are not prepared for AI — politically, legally or ethically. Nor is the world prepared for how AI will transform geopolitics and the ethics of international relations. We identify three ways in which this could happen.
First, developments in AI will shift the balance of power between nations. Technology has always shaped geopolitical power. In the 19th and early 20th centuries, the international order was based on emerging industrial capabilities: steamships, airplanes and so on. Later, control of oil and natural gas resources became more important.
All major powers are keenly aware of AI's potential to advance their national agendas. In September 2017, Vladimir Putin told a group of schoolchildren: "Whoever becomes the leader [in AI] will become the ruler of the world." While the U.S. currently leads in AI, China's tech companies are progressing rapidly and are arguably superior in the development and application of specific technologies, such as facial recognition software.
Domination of AI by major powers will exacerbate existing structural inequalities and contribute to new forms of inequity. Countries that already lack access to the internet and are dependent upon the largesse of wealthier nations will be left far behind. AI-powered automation will transform employment patterns in ways that advantage some national economies relative to others.
Second, AI will empower a new set of geopolitical players beyond nation states. In some ways, leading companies in digital technology are already more powerful than many nations. As French President Emmanuel Macron asked in March 2019: “Who can claim to be sovereign, on their own, in the face of the digital giants?”
The recent invasion of Ukraine provides an example. National governments responded by imposing economic sanctions on the Russian Federation. But arguably at least as impactful were the decisions of companies such as IBM, Dell, Meta, Apple and Alphabet to cease their operations in the country.
Similarly, when Ukraine feared that the invasion would disrupt its internet access, it appealed for assistance not to a friendly government but to tech entrepreneur Elon Musk. Musk responded by turning on his Starlink satellite internet service in Ukraine and delivering receivers, enabling the country to continue to communicate.
The digital oligopoly, with access to the large and growing databases that fuel machine learning algorithms, is fast becoming an AI oligopoly. Given their vast wealth, leading corporations in the U.S. and China can either develop new applications themselves or acquire smaller companies that invent promising tools. Machine learning systems might also help this oligopoly circumvent national regulations.
Third, AI will open possibilities for new forms of conflict. These range from influencing public opinion and election results in other countries through fake media and manipulated social media postings, to interfering with the operation of other countries’ critical infrastructure — such as power, transportation or communications.
Such forms of conflict will prove hard to manage, prompting a complete rethink of arms control instruments that were never designed to grapple with weapons of coercion. Arms control negotiations currently require adversaries to clearly perceive each other's capabilities and their military necessity. But while nuclear weapons, for example, are constrained in how they can be developed and used, almost anything is possible with AI, whose capabilities can evolve both quickly and opaquely.
Without enforceable treaties restricting their deployment, autonomous weapons systems assembled from off-the-shelf components will eventually be available to terrorists and other non-state actors. There is also a significant risk that poorly understood autonomous weapons systems could unintentionally initiate conflicts or escalate existing hostilities.
The only way to mitigate AI's geopolitical risks, and to provide the agile and comprehensive oversight it will require, is through open dialogue about its benefits, limitations and complexities. The G20 is a potential venue, or a new international governance mechanism could be created to involve the private sector and other key stakeholders.
It is widely recognized that international security, economic prosperity, the public good and human well-being depend on managing both the proliferation of deadly weapons systems and climate change. We believe they will increasingly depend at least as much on our collective ability to shape the development and trajectory of AI and other emerging technologies.