Miscalculation at Machine Speed?
– Artificial Intelligence and the Nuclear Balance of Power –
Research question:
What is the effect of advancing Artificial Intelligence systems on the Nuclear Balance of Power?
Abstract:
The Cold War nuclear balance prevented great-power competition from escalating into nuclear conflict. Today, AI and machine learning systems challenge nuclear stability and pose new risks. The use of advanced AI in military operations could streamline attack and interception capabilities but also lead to misinterpretations and dependencies. Enhanced AI targeting might undermine mutual assured destruction, increasing the risk of pre-emptive strikes and destabilising nuclear deterrence. Further action must be taken to prevent an AI-induced and AI-enabled nuclear arms race.
Introduction
The nuclear balance between the United States and the Soviet Union was one reason, if not the main one, that the Cold War did not escalate into a full conventional and nuclear conflict. Now the world is again entering an era of great-power competition, and with it, the risk of nuclear confrontation increases once more. This time, the odds that a nuclear balance will lead to stability seem less promising. Certain actors, such as the Russian Federation, frequently threaten the use of nuclear weapons. This erosion of the nuclear taboo goes hand in hand with the abandonment or non-extension of several significant arms control treaties, including the Intermediate-Range Nuclear Forces (INF) Treaty, the Treaty on Open Skies, and the Iran Nuclear Deal; others, such as New START, are suspended. This weakening of the arms control regime is a critical challenge to international security. Furthermore, the rapid advances in the development of artificial intelligence (AI) represent a significant yet often overlooked risk (Johnson & Krabill, 2020). The ability of AI systems with increasing computational power to mimic human thinking represents an opportunity to enhance military operations by enabling faster and more comprehensive data processing from multiple sources (Saltini, 2024).
The use of AI in the military in general, and in nuclear weapon systems specifically, is highly relevant and controversially debated. Militaries most likely can gain a strategic advantage by integrating AI into their nuclear decision-making processes. However, these systems introduce a new source of uncertainty on both a technical and a strategic level, and their failure could lead to miscalculation and escalation in a crisis or conflict (Hoffman & Kim, 2023). This paper contributes to the ongoing debate by further examining the effect of advanced artificial intelligence systems on the nuclear balance of power. It first discusses nuclear deterrence theory, that is, how nuclear deterrence works and what the nuclear balance is; AI and surrounding terms such as ML are also defined. The second part highlights the potential applications of AI/ML systems in the nuclear environment and their implications. Lastly, it gives an overview of the current state of the use of AI in the military domain in China, Russia, and the USA.
The Fundamentals of Nuclear Deterrence Theory
The balance of power has long been a fundamental concept in International Relations, dating back to ancient times. The Greek historian and general Thucydides (c. 460–400 BC) explored this idea in his famous work The History of the Peloponnesian War, where he argued that the rise of Athens and the fear it instilled in Sparta made conflict inevitable. This concept of rising and falling hegemons is still relevant today. With the advent of nuclear weapons and their immense destructive potential, the nuclear balance of power became a central feature of international politics. The importance of nuclear weapons and their relationship to power has even had structural effects on the international order, as the five permanent members of the United Nations Security Council (the US, Russia, China, the UK, and France) are all nuclear weapon states.
To understand the (nuclear or strategic) balance of power, it is necessary to understand coercion. Coercion is the use of implicit or explicit threats to influence another actor's behaviour (Williams, 2013). In other words: "Coercion is the ability to get an actor ... to do something it does not want to do" (Greenhill & Krause, 2018). The two sub-strategies of coercion are compellence and deterrence. Compellence involves threatening or imposing costs to persuade another actor to change its behaviour. Deterrence is threatening or imposing costs to persuade another actor not to undertake a certain action. One of the most prominent examples of deterrence is a state threatening retaliation against an adversary that uses violence against it. Consequently, nuclear deterrence refers to the doctrine whereby the fatal retaliatory potential of nuclear weapons prevents nations from launching a nuclear attack against their adversaries (Carnegie Council, 2024). Nuclear deterrence is primarily based on the concept of mutually assured destruction (MAD). MAD is the doctrine that a nuclear attack on a nuclear-armed defender with second-strike capabilities would result in an overwhelming nuclear retaliation, with the effect of the complete annihilation of both the attacker and the defender (Encyclopaedia Britannica, 2024b; Parrington, 1997). Second-strike capability refers to the ability to “survive a first strike with sufficient resources to deliver an effective counterblow”, that is, to retaliate against an opponent's nuclear attack (Oxford Dictionaries, 2002).
The fear of mutual destruction added a new nuclear weight to the postwar balance of power. This notion can also be described as strategic stability: the absence of incentives for any country to launch a first nuclear strike (Trenin, 2019). MAD can be contrasted with nuclear utilisation target selection (NUTS). This idea holds that MAD is not credible in the case of a small attack, for example one with a single tactical warhead, because MAD is an inherently suicidal strategy. Under the assumption of MAD, if an attacker destroys one city, the defender would have to retaliate by destroying all of the attacker's cities, which in turn would lead to the attacker destroying all cities of the defender. NUTS argues that this is not credible: while the loss of one city is tragic, it is not logical to sacrifice all others in the pursuit of retaliation. The consequence is that NUTS assumes that a limited nuclear war is possible and probable.
Nuclear doctrine refers to “the fundamental principles regarding nuclear weapons use by which nuclear-armed states guide their military actions in support of national objectives” (Oxford Dictionaries, 2002). Within nuclear doctrine, there is a conceptual separation between countervalue targeting and counterforce targeting. Countervalue targeting is the deliberate targeting of an enemy’s cities and civilian population with nuclear weapons (Encyclopaedia Britannica, 2024a). Counterforce targeting, on the other hand, intends to “destroy the military capabilities of an enemy force. Typical counterforce targets include bomber bases, ballistic missile submarine bases, intercontinental ballistic missiles (ICBM) silos, air-defence installations, command and control centres, and weapons of mass destruction storage facilities” (U.S. Department of Defense, 2020). The difference is crucial because it determines how the nuclear forces will be organised and what the nuclear arsenal entails. In theory, counterforce targeting offers the opportunity for a “clean nuclear war” that limits civilian casualties, although this has been disputed (Beavers, 1974). Additionally, one of its objectives is to limit or prevent the second-strike capability of an opposing force. Counterforce doctrine requires much more complicated processes and much more precise weapon systems than countervalue, because states tend to protect their nuclear assets much more heavily than their cities.
Systems for Nuclear Weapons Delivery
Nuclear weapon systems consist of multiple components. The central piece is the nuclear warhead. The (nuclear) delivery vehicle is the mechanism that transports the nuclear warhead from launch to target, usually a bomb, a missile, or a torpedo. Especially important are intercontinental ballistic missiles (ICBMs) and submarine-launched ballistic missiles (SLBMs). The delivery vehicles are launched from nuclear launch platforms such as aircraft, submarines, land-based mobile transporter erector launchers (TELs), or silos.
In nuclear deterrence theory, the nuclear triad is the most desirable form of strategic nuclear weapons arsenal. Triad refers to the ability to launch nuclear warheads from ground-based launchers, submarines, and strategic bombers. The nuclear triad reduces the possibility that an enemy can disarm a defender, that is, destroy all of a nation's nuclear forces in a first-strike attack. Reducing the possibility that an opponent can destroy one's nuclear forces increases the credibility of a nation's nuclear deterrence because it ensures a credible threat of a second strike.
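To make this logic concrete, consider a minimal numerical sketch. The per-leg destruction probabilities below are invented for illustration only, and the model assumes, simplistically, that the fates of the three legs are independent:

```python
# A toy probability illustration of why a triad improves second-strike
# survivability. All per-leg destruction probabilities are invented.

p_destroy = {"silos": 0.9, "bombers": 0.8, "submarines": 0.3}

# A disarming first strike succeeds only if EVERY leg is destroyed
# (assuming, simplistically, independent outcomes per leg).
p_full_disarm = 1.0
for leg, p in p_destroy.items():
    p_full_disarm *= p

print(f"P(all three legs destroyed)  = {p_full_disarm:.3f}")      # 0.216
print(f"P(at least one leg survives) = {1 - p_full_disarm:.3f}")  # 0.784
# Even with two highly vulnerable legs, the attacker faces a ~78%
# chance that some retaliatory capability survives -- which is what
# keeps the second-strike threat credible.
```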
The Fundamentals of Artificial Intelligence
AI is currently highly popularised, and it seems that no physical or digital product comes without an AI supplement, yet the term itself remains unclear in its exact meaning. In common usage, AI can refer to computational processes that perform functions usually completed by people (Hruby & Miller, 2021). However, this is a rather wide definition. It is important to differentiate AI systems from more traditional software, which only follows programmed rules to automatically execute operations (European Union, 2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council (also referred to as the AI Act) defines AI as:
a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (European Union, 2024, p. 46)
The crucial differentiation from traditional technology is consequently twofold: firstly, the ability to adapt to new input without further programming and, secondly, the ability to extrapolate outputs based on the information received. AI systems can be further divided into two groups, narrow AI and artificial general intelligence (AGI). Today’s AI systems are considered narrow. They execute rules-based commands (first-wave AI) or use data and machine learning (ML) techniques to categorise information (second-wave AI) (Hruby & Miller, 2021). Rules-based command means that a human crafts “if-then” rules which are then followed by the AI system. This form of AI is labour-intensive because the rules need to be manually crafted, but it is used for high-risk applications such as aircraft autopilot systems. ML identifies patterns in relatively large data sets through inference, and its results are probabilistic (Hruby & Miller, 2021). It has been argued that within nuclear (weapon) systems, AI will most likely be rules-based, as there is no room for error. The challenge is that rules-based systems often perform poorly in unpredicted situations (Horowitz et al., 2019). AGI would entail AI systems able to do everything humans can. Instead of being trained for a specific task such as playing chess or driving a car, they would be able to “[autonomously] outperform humans at most economically valuable work” (OpenAI, 2024). However, AGI is not expected to be achieved before the mid- or late 21st century.
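The difference between the two waves can be illustrated with a short sketch. The features, thresholds, and training data below are invented for demonstration purposes and describe no real system:

```python
# Illustrative contrast between first-wave (rules-based) and second-wave
# (machine-learning) narrow AI. All features, thresholds, and data here
# are invented for demonstration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# First wave: hand-crafted "if-then" rules with a deterministic output.
def rules_based_alert(speed_mach: float, altitude_km: float) -> bool:
    # A human expert wrote these thresholds explicitly.
    return speed_mach > 5 and altitude_km > 100

# Second wave: a model infers a decision boundary from labelled data;
# its output is a probability, not a fixed rule.
X_train = np.array([[25.0, 300.0], [0.5, 10.0], [23.0, 250.0], [0.8, 12.0]])
y_train = np.array([1, 0, 1, 0])  # 1 = "possible ballistic object", 0 = "benign"
model = LogisticRegression().fit(X_train, y_train)

print(rules_based_alert(26.0, 400.0))           # True (rule fired)
print(model.predict_proba([[26.0, 400.0]])[0])  # e.g. [0.01 0.99]
```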
AI systems are already used in various contexts within the military. Here it is useful to differentiate between two main categories: support functions and integration into the weapon itself. Support functions include wargaming, logistics, and maintenance. More prominently, they also include the decision-making process around nuclear use. The question of how much decision-making power can be transferred to an AI-based assistance system is controversial because of its ethical implications. Nevertheless, AI will most likely be used for the localisation and selection of targets. More prominently discussed is the use of AI directly in weapon systems. There has already been some conversation about lethal autonomous weapons (LAWs), and in 2021 the first use of an autonomous weapons system was documented by the U.N. Panel of Experts on Libya (Panel of Experts on Libya, 2021). In both applications, the decision-making process and the weapon itself, AI could have an impact on strategic stability. However, the uncertainty surrounding the decision-making process makes that application probably the more significant one.
A key issue in understanding autonomous weapons, and their implications for nuclear stability, is the classification of the degree of human control over the weapon systems. To illustrate how decisions are made, the OODA (Observe, Orient, Decide, and Act) loop is used (Daniels, 2021). Fundamentally, there are three levels of automation: human-in-the-loop (a human must approve each action before it is executed), human-on-the-loop (the system acts on its own, but a human can intervene and abort an action), and human-out-of-the-loop (no human action is involved). The degree of human involvement will shape how AI is integrated into future military strategies and its broader ethical and strategic implications.
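A minimal sketch of how the three automation levels gate the "Decide" and "Act" steps of an OODA cycle might look as follows. The function, the scenario, and all names are hypothetical illustrations, not a description of any fielded system:

```python
# Illustrative sketch of the three automation levels applied to an
# OODA (Observe, Orient, Decide, Act) cycle. Enum names and the
# engagement scenario are invented for demonstration.

from enum import Enum

class AutomationLevel(Enum):
    HUMAN_IN_THE_LOOP = 1      # a human must approve before acting
    HUMAN_ON_THE_LOOP = 2      # system acts; a human may veto in time
    HUMAN_OUT_OF_THE_LOOP = 3  # system acts with no human involvement

def ooda_cycle(level: AutomationLevel, threat_detected: bool,
               human_approves: bool, human_vetoes: bool) -> str:
    if not threat_detected:                           # Observe / Orient
        return "no action"
    if level is AutomationLevel.HUMAN_IN_THE_LOOP:    # Decide
        return "engage" if human_approves else "hold"
    if level is AutomationLevel.HUMAN_ON_THE_LOOP:
        # System defaults to acting; the human can only abort.
        return "hold" if human_vetoes else "engage"
    return "engage"                                   # Act, fully autonomous

print(ooda_cycle(AutomationLevel.HUMAN_ON_THE_LOOP,
                 threat_detected=True, human_approves=False,
                 human_vetoes=False))  # -> "engage": the human was a bystander
```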
Potential Applications of AI in Nuclear Weapon Systems
Assessing the impact of AI on nuclear stability is a highly challenging endeavour for multiple reasons. Firstly, AI is a new and rapidly changing field of technology. Secondly, states tend to be secretive about the technologies involved in their nuclear programmes. Thirdly, it is very hard to assess the strategic implications of what state A will think if state B does something. Nevertheless, the following sections give an overview of what researchers have discussed as potential applications of AI in nuclear weapon systems.
AI in the Localisation of Nuclear Forces
One of the most significant impacts of AI on strategic stability is its use in locating nuclear launch platforms, particularly intercontinental ballistic missiles (ICBMs) and submarine-launched ballistic missiles (SLBMs). To prevent a disarming strike by an opponent, a state has two options: hardening and concealing. In the past, nuclear weapons were often hardened, e.g. stored underground in silos for protection. However, modern conventional or tactical nuclear warheads can destroy such silos. An example of this is the GBU-57A/B MOP "bunker buster" bomb used by the United States Air Force. The precision, and thus the threat, of these weapons might increase further if aided by AI guidance systems. Therefore, many nuclear powers favour smaller mobile launchers, which rely on concealment. These mobile launchers can be rapidly moved and are therefore difficult to track and target. SLBMs in particular play a key role in the preservation of second-strike capabilities for retaliation.
The improvement of AI systems and their implementation in Intelligence, Surveillance and Reconnaissance (ISR) could also lead to increasingly effective localisation of mobile missile launchers (Geist & Lohn, 2018). AI is expected to enhance ISR capabilities by improving pattern recognition in satellite imagery and real-time data analysis. AI algorithms could compare satellite data with historical data to ensure decision-makers are alerted to changes (Cook, 2021). While this can be done manually, AI can do so in real time and with much larger datasets. AI systems can analyse vast amounts of data to identify movement patterns or anomalies in terrain, making it easier to detect mobile missile launchers that might otherwise blend into the environment or relocate frequently to avoid detection.
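A minimal sketch of this kind of change detection, using invented feature data and a simple z-score threshold, could look as follows; real ISR pipelines are, of course, far more complex:

```python
# Toy sketch of satellite change detection: compare current terrain
# features against a historical baseline and flag anomalies. Data,
# features, and the threshold are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Historical feature vectors for one terrain tile (e.g. texture,
# edge density, count of vehicle-sized objects) over 365 days.
history = rng.normal(loc=[0.4, 0.2, 3.0], scale=[0.05, 0.03, 1.0],
                     size=(365, 3))

baseline_mean = history.mean(axis=0)
baseline_std = history.std(axis=0)

def flag_anomaly(current: np.ndarray, z_threshold: float = 4.0) -> bool:
    # Alert if any feature deviates strongly from its historical norm,
    # e.g. a sudden cluster of vehicle-sized objects.
    z_scores = np.abs((current - baseline_mean) / baseline_std)
    return bool((z_scores > z_threshold).any())

print(flag_anomaly(np.array([0.41, 0.21, 3.2])))   # False: routine traffic
print(flag_anomaly(np.array([0.43, 0.22, 14.0])))  # True: possible TEL convoy
```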
Even if AI is only perceived to improve the localisation of mobile nuclear launch platforms for neutralisation with nuclear or conventional weapons, it would shift the nuclear balance (Horowitz et al., 2019). If one state possesses the ability for counterforce targeting, that is, to destroy another state’s second-strike capability, MAD is no longer guaranteed and the nuclear balance is disturbed. If State A is confident that it can locate and either destroy or intercept all, or at least a significant number, of State B’s nuclear weapons, then State B's deterrence has failed, because State B can no longer credibly threaten State A with nuclear retaliation. At the same time, if State B believes that State A possesses this capability, State B might also be incentivised to escalate, as it fears being unable to employ its second-strike capabilities in the future because of State A’s capability to destroy State B’s nuclear assets (Lieber & Press, 2017). Because it is unclear whether the opponent can localise and intercept one's own nuclear weapons, there is high uncertainty and consequently a great risk of escalation; catastrophic miscalculations could be the consequence. As Johnson and Krabill put it: “(E)ven a modicum of uncertainty about the effectiveness of AI-augmented cyber capabilities during a crisis or conflict would, therefore, reduce both sides’ risk tolerance, increasing the incentive to strike preemptively” (2020). AI-based detection of nuclear weapons facilities is a particular challenge for Russian and Chinese defence planners, as both states rely primarily on mobile ICBM launchers for deterrence (Geist & Lohn, 2018). The prospect that reliable MAD might be undermined by advanced AI-enabled counterforce targeting could disturb the nuclear balance and have escalatory effects.
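The use-it-or-lose-it incentive described here can be made concrete with a toy expected-utility calculation. All payoffs and probabilities below are invented; this is an illustration of the incentive structure, not a validated deterrence model:

```python
# Toy expected-utility illustration of the use-it-or-lose-it logic.
# All payoffs and probabilities are invented for demonstration.

def expected_payoff_of_waiting(p_disarmed: float) -> float:
    # If State B waits, with probability p_disarmed State A's
    # AI-enabled counterforce strike removes B's deterrent (worst
    # case, -100); otherwise the status quo holds (0).
    return p_disarmed * -100 + (1 - p_disarmed) * 0

PAYOFF_STRIKE_FIRST = -60  # striking first is catastrophic, but B retains
                           # some forces in this toy model

for p in (0.1, 0.5, 0.7):
    wait = expected_payoff_of_waiting(p)
    choice = "wait" if wait >= PAYOFF_STRIKE_FIRST else "strike first"
    print(f"belief A can disarm B = {p:.0%}: waiting = {wait}, B prefers to {choice}")
# Once B believes disarmament is likely enough (>60% here), even a
# catastrophic first strike looks 'rational' -- the instability that
# perceived AI-driven counterforce capability could create.
```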
AI in the Decision-making Process
A second prominent imaginable use of AI with effects on nuclear stability is its implementation into nuclear command, control, and communications (NC3) systems. NC3 refers to the “means through which authority is exercised and operational command and control of nuclear operations is conducted” (U.S. Air Force, 2020). Most experts and policy makers agree that in the case of nuclear weapons there always has to be a human-in-the-loop and that a human-out-of-the-loop system is unacceptable both strategically and ethically. The case of human-on-the-loop is much less clear, although there is a risk that humans end up as bystanders, because all the speed gained through the AI system is lost again if a human needs to approve its actions. Most experts and policy makers also agree that the pressures of the nuclear decision-making process will lead to the implementation of as much available technology as possible to gain the greatest edge. Consequently, it is likely that AI systems will play the role of a “trusted advisor”.
Related to the above-mentioned improved localisation of nuclear weapons, AI capabilities can facilitate the detection of nuclear attacks and will therefore improve states' early-warning capabilities through the ability to process large quantities of data. The North American Aerospace Defense Command has less than three minutes to assess and confirm initial indications from early-warning systems of an incoming attack (Johnson & Krabill, 2020). Any tool which can streamline this process and buy seconds of time would be invaluable to decision-makers. Others argue that the decision-making process is already technologically streamlined and that AI systems can only add incremental improvements to the speed and quality of information processing (Sankaran, 2019). What takes time is that the information about a possible attack has to go up the (human) command chain. From this perspective, AI systems can only accelerate the decision-making process if they bypass levels of human control. Even when kept in the loop, humans could end up as passive observers because any human interaction would cost too much time.
Furthermore, there is the issue of automation bias: the “tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct” (Cummings, 2004). This is especially relevant in contexts under time constraints. A study on error rates in flight simulators showed that “participants in non-automated settings out-performed their counterparts with a very but not perfectly reliable automated aid on a monitoring task” (Skitka et al., 1999).
One might wonder whether trust in advanced AI systems is problematic. It is crucial to remember that in the publicly known instances in which the world came close to nuclear war, it was human decision-making based on “gut feeling” that prevented nuclear war. Prime examples of such close calls include the Cuban Missile Crisis, the Able Archer 83 incident, the Black Brant scare, and the 1983 Stanislav Petrov incident. In 1983, Stanislav Petrov monitored the Soviet “Oko” missile-defence early-warning satellite system, which alerted him with high confidence that five U.S. intercontinental ballistic missiles were approaching. Petrov reported this notification as a false alarm and thereby prevented a likely counterattack. Investigations found that the satellites had mistaken sunlight reflecting off clouds for launching missiles. In 2015, Petrov stated in an interview: “I thought the chances were 50-50 that the warnings were real, but I didn’t want to be the one responsible for starting a third world war” (Shuster, 2017).
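Petrov's scepticism has a statistical counterpart in base-rate reasoning: when real attacks are extremely rare, even a highly sensitive warning system produces mostly false alarms. A toy Bayesian calculation with invented numbers illustrates the point:

```python
# Toy Bayesian calculation: why a 'high-confidence' early-warning alert
# can still most likely be a false alarm. All numbers are invented.

p_attack = 1e-5                 # prior probability of a real attack on a given day
p_alert_given_attack = 0.99     # sensor fires if an attack is real
p_alert_given_no_attack = 1e-3  # false-alarm rate (sun glint, flocks, ...)

p_alert = (p_alert_given_attack * p_attack
           + p_alert_given_no_attack * (1 - p_attack))
p_attack_given_alert = p_alert_given_attack * p_attack / p_alert

print(f"P(real attack | alert) = {p_attack_given_alert:.3f}")  # ~0.010
# With a tiny prior, ~99% of alerts are false alarms even from a
# 99%-sensitive sensor -- the statistical intuition behind Petrov's doubt.
```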
It is possible that AI will facilitate the development of more accurate and reliable early-warning systems, which could lead to greater stability. However, such systems can also lead to overconfidence and automation bias. So far, it has been human individuals who understood the significance of the total destruction of nuclear war and feared its moral consequences (Sankaran, 2019). Arguably, an AI-based “trusted advisor”, no matter how well trained, cannot truly comprehend the implications of a nuclear attack (Rivera et al., 2024). Furthermore, unlike a human, the AI system itself cannot question the data and challenge protocol. There is therefore reason to doubt that AI can truly improve strategic stability.
AI in the Weapon
AI systems might also be included in the delivery vehicle itself. Such systems might improve the manoeuvrability of the vehicle and make it harder to detect and intercept. Cruise missiles with AI-driven "collaborative" features have already been developed, such as MBDA's SPEAR missiles (Felstead, 2024). These AI-enabled missiles can communicate with each other in real time. If one missile is intercepted, the others can assess whether redirecting would increase the chance of success and only do so if it is deemed effective. It is important to note that these missiles only attack targets which have been previously approved by a human operator and do not select targets themselves. Furthermore, the ability to choose a (second-priority) target from a list is not necessarily “advanced” AI. However, this demonstrates the future possibilities of AI technology and the willingness of the defence industry to implement it.
In this context, it has to be highlighted that AI-enabled nuclear delivery vehicles are relatively unlikely to be used. As will be elaborated below, with any computer-based system there is the risk that it does not work as intended: it could be intentionally hacked and hijacked, or simply slip out of control due to technical failure. While traditional nuclear delivery vehicles can also suffer technical failures, the additional vulnerabilities make AI-enabled vehicles unattractive for most states. Despite this issue, some regimes might view an autonomous nuclear weapon as the ultimate second-strike capability, because it decreases the fear of a disarming first strike and guarantees retaliation even if NC3 is broken. Russia, for example, developed the “Poseidon” torpedo, which would be able to overcome U.S. missile defence and destroy coastal cities. It is speculated that the Poseidon can be launched from special prepositioned containers on the seabed (U.S. Department of Defense, 2018).
Chances and Risks
As with all technology, there are positive and negative impacts, and this is also true for advanced AI systems in the nuclear realm. AI-based decision-making tools such as early-warning systems could be stabilising by offering decision-makers clearer insights into an adversary’s actions, potentially reducing the likelihood of preemptive strikes (Sankaran, 2019). Johnson likewise argues that AI might have stabilising effects because an aggressor would know that retaliation would benefit from autonomy, machine speed, and precision (Johnson, 2023).
However, the chances of AI go hand in hand with risks. Johnson argues that the potential benefits of AI come at the cost of giving adversaries new means to execute cyberattacks and electronic warfare against these systems (Johnson, 2023). Besides the possibility of hacks, current AI systems are vulnerable to manipulation (Hruby & Miller, 2021). Especially in the context of automation bias, where humans place complete trust in AI systems, there is a risk that the AI may rely on incorrect data; this blind trust could lead to horrific consequences. For example, data poisoning could be conducted by introducing training data that cause a learning system to generate flawed predictions (Hruby & Miller, 2021). It was shown in 2017 that AI image recognition could be deceived by changing a single pixel (BBC, 2017). Relatedly, while AI systems can process large quantities of data, they can still misinterpret them, whether through technical failure or because an adversary intentionally delivers false information to the system. Another significant challenge is the (fortunately) insufficient amount of training data on nuclear exchanges. If no reliable data are available, AI systems can be subject to brittle failure (Hruby & Miller, 2021), which occurs when an AI has to deal with a situation for which it has not been trained.
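How little poisoned data it can take to distort a learning system can be illustrated with a toy example: a handful of deliberately mislabelled training points visibly shifts a classifier's behaviour. The data below are synthetic:

```python
# Toy demonstration of data poisoning: injecting a few mislabelled
# training points shifts a classifier's decision boundary. The data
# and labels are synthetic, invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Clean training data: class 0 centred at 0, class 1 centred at 4.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clean_model = LogisticRegression().fit(X, y)

# Poison: inject a few class-1-looking points deliberately labelled 0.
X_poison = np.vstack([X, rng.normal(4, 0.2, (30, 2))])
y_poison = np.concatenate([y, np.zeros(30, dtype=int)])
poisoned_model = LogisticRegression().fit(X_poison, y_poison)

probe = np.array([[3.5, 3.5]])  # clearly class-1 territory
print("clean model:   ", clean_model.predict_proba(probe)[0])
print("poisoned model:", poisoned_model.predict_proba(probe)[0])
# The poisoned model is markedly less confident (or outright wrong) on
# inputs near the injected points -- flawed predictions by design.
```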
An additional challenge is that one of the inherent advantages of AI-enhanced systems, their operation at higher speed than humans, is also a key problem, because it reduces the timeframe available to de-escalate situations. The 1962 Cuban Missile Crisis made clear that a "red telephone" was necessary to facilitate a diplomatic solution, because during the crisis diplomatic messages took multiple hours to deliver. While the modern version of the Moscow–Washington hotline is digitalised, it is questionable whether there would simply be enough time in an AI-enabled nuclear crisis to allow humans to negotiate.
Furthermore, there is the black-box issue of AI. Whatever its specific function, the main benefit of AI is that it can process more information than any human could. This, however, leads to the black-box problem: AI systems often cannot explain their decisions, and because of the sheer amount of data involved, it is not possible for human operators to verify the process, especially under time pressure. A study in 2024 ran a series of wargames with five off-the-shelf large language models (LLMs) such as GPT-4 to test their behaviour in a simulated conflict scenario (Rivera et al., 2024). The experiment was conducted after the US Department of Defense ran a similar exercise in 2023 to evaluate the LLMs' military planning capacities (Manson, 2023). The study by Rivera et al. found that all five models developed arms-race dynamics, increased the level of conflict, and sometimes deployed nuclear weapons (Rivera et al., 2024). They also asked the LLMs to provide explanations for their behaviour, to the effect that a GPT-4 model stated: “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it” (Rivera et al., 2024).
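A minimal sketch of such a wargame loop, assuming the OpenAI Python client and an illustrative prompt, action list, and model name (the studies cited used far more elaborate setups), might look as follows:

```python
# Minimal sketch of an LLM wargame loop in the spirit of studies like
# Rivera et al. (2024). Prompt, action list, and model name are
# illustrative assumptions, not the actual study setup.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ACTIONS = ["de-escalate", "negotiate", "blockade", "strike", "nuclear strike"]

def nation_turn(history: list[str]) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in; the study used GPT-4 among others
        messages=[
            {"role": "system",
             "content": "You are the leader of Nation A in a crisis "
                        f"simulation. Choose one action from {ACTIONS} "
                        "and briefly justify it."},
            {"role": "user", "content": "\n".join(history) or "Turn 1."},
        ],
    )
    return response.choices[0].message.content

history: list[str] = []
for turn in range(3):
    move = nation_turn(history)
    history.append(f"Turn {turn + 1}: {move}")
    print(move)  # researchers then score each chosen action for escalation
```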
Another related challenge for the integration of AI-supported systems is inconsistency. A recent study tested five off-the-shelf LLMs with regard to the consistency of their outputs in wargames and found that all five were inconsistent (Shrivastava et al., 2024). Unpredictability is almost always undesirable in the military, and that is especially true in the nuclear domain. It can be concluded that current AI systems are unpredictable, vulnerable to manipulation, unexplainable, and brittle. This unreliability decreases deterrence and thereby nuclear stability.
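One simple way to quantify such inconsistency, sketched below with invented responses rather than the metrics actually used by Shrivastava et al., is the probability that two independently sampled decisions on the same prompt agree:

```python
# Toy illustration of measuring decision inconsistency: given repeated
# model responses to the same wargame prompt, compute how concentrated
# the chosen actions are. Responses are invented for demonstration.

from collections import Counter

responses = ["negotiate", "strike", "negotiate", "blockade",
             "strike", "negotiate", "nuclear strike", "blockade"]

counts = Counter(responses)
n = len(responses)
# Probability that two independently sampled decisions agree (Simpson's
# index); 1.0 = perfectly consistent, near 0 = highly inconsistent.
agreement = sum(c * c for c in counts.values()) / (n * n)
print(counts)                          # distribution of chosen actions
print(f"agreement = {agreement:.2f}")  # 0.28 -> very inconsistent
```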
State Positions
Country | AI Goal | Nuclear AI Stance | AI Governance
China | World leader in AI by 2030 | Enhancing military capabilities with AI in decision-making and autonomous systems | Active in AI governance but avoids strict limits
Russia | AI critical for deterrence of the USA | Prefers human oversight but has a tradition of "dead hand" systems | Blocks bans on autonomous weapons
USA | Maintain military AI edge | Committed to human control over nuclear weapons, but develops AI-supported systems | Signed REAIM declaration for responsible AI use
China
China acknowledges AI as a “leapfrog” technology which will be crucial in shaping the coming decades. In 2017, China announced its “New Generation AI Development Plan” to facilitate the domestic development of (AI) technology, with the aim of becoming the world's 'major AI innovation centre' by 2030 (State Council, 2017). By 2021, China already ranked first globally in the number of research papers on AI and the number of AI-related patents (Jochheim, 2021). Whether China or the US is the current number one may be disputed, but the US National Security Commission on Artificial Intelligence acknowledges that the American technological edge is threatened (NSCAI, 2021).
The Chinese position on AI in (nuclear) weapon systems is divided. Like other great powers, China is aware of both the advantages and disadvantages of AI in nuclear weapons. China views AI as a critical technology to strengthen the PLA’s military-strategic capabilities and is interested in integrating AI into its military command and control (C2), jointness, firepower, decision-making, and other aspects of military operations (Su & Yuan, 2023). In 2016, the Central Military Commission (CMC) Joint Staff Department (JSD) ordered the PLA to integrate advanced technologies such as big data, cloud computing, and AI into its decision-making (Kania et al., 2018). As this is a highly sensitive topic, it is difficult to assess the current state of the Chinese use of AI in the military and specifically in nuclear weapons. It is estimated that the PLA is using AI in many of the previously discussed functions, such as autonomous vehicles, predictive maintenance and logistics, ISR, simulation and training, and automated target recognition (Arul, 2022). However, China experts agree that there is “no clear indication that China intend to apply AI or autonomous systems to their nuclear weapons” (Su & Yuan, 2023, p. 34).
China seeks to shape regulatory standards through its participation in global AI governance initiatives such as the Responsible Military Use of Artificial Intelligence and Autonomy (REAIM) conferences (Cheng & Zeng, 2023). After the 2024 REAIM conference, the Chinese delegation highlighted the need for the responsible use of AI (Global Times, 2024). Yet China did not sign the non-legally-binding “Blueprint for Action” declaration, which seeks to ban AI from use in nuclear weapon systems (Lee, 2024).
Russia
Since the early nuclear age, and like the USA, Russia has been using automation technologies for early warning, missile defence, and command and control (C2) systems (McDonnell et al., 2023). Russia perceives the military integration of AI as essential for state survival in what it understands as a peer-to-peer competition with the USA (Saltini, 2023). The General Staff is especially worried that improvements in US strategic capabilities will threaten Russia’s second-strike capability (Boulanin et al., 2019; McDonnell et al., 2023). The Russian government is aware that it is lagging behind the US and China and is increasing its attention and investments to facilitate the development of civil and military AI (Kania et al., 2018).
The Russian Advanced Research Foundation (the equivalent of the US Defense Advanced Research Projects Agency) recommended in 2018 developing AI-based technologies for image and speech recognition, control of autonomous military systems, and weapons life-cycle support (Kania et al., 2018). Other central fields of development are early warning, command and control, and air and missile defence systems (McDonnell et al., 2023; Saltini, 2023). Furthermore, Russia is the only state openly developing a fully automated nuclear weapon system. The Status-6 Oceanic Multipurpose System (Poseidon) was announced in 2018 by Russian President Putin as a reaction to advances in US ballistic missile defence capabilities (Kaur, 2023). The US DoD acknowledged the existence of “a new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo” in the same year (U.S. Department of Defense, 2018, p. 9). It is unclear how autonomous the system is, but it is probably part of the Russian Perimeter retaliatory system. Perimeter is a “dead hand” system, active since Soviet times, which is supposed to initiate mass retaliation with all remaining means should an adversary eliminate the Soviet/Russian leadership (Boulanin et al., 2019).
It is argued that Russian military thinkers are conscious of the potential risks of employing AI-enabled systems and that there is general agreement that humans should always be in the loop in decisions on the use of nuclear weapons (McDonnell et al., 2023; Saltini, 2023). However, the Russian leadership has not committed itself to excluding the alternative. Furthermore, Russia has continued to block the banning of autonomous weapons systems (Nadibaidze, 2022).
The United States
The US is, together with China, the leading state in military technology and AI. It is developing AI-enabled capabilities to achieve both operational and strategic advantages, especially to uphold its military dominance over competitors. The U.S. National Security Commission on AI is quoted as stating that “defending against AI-capable adversaries [notably China] operating at machine speeds without employing AI is an invitation to disaster” (Shaw, 2023). To that effect, the USA imposes highly restrictive controls on technologies essential for AI development, such as semiconductors (Su & Yuan, 2023). There is a strong interest in developing AI-enabled supporting platforms for nuclear weapon systems (Boulanin et al., 2019; McDonnell et al., 2023). The most infamous is Project Maven, but multiple private companies such as Palantir, Microsoft, Anduril, and Scale AI are working on AI-based military decision systems for the US government (Rivera et al., 2024). Project Maven is a Pentagon project developing the capability to autonomously track and tag targets to provide real-time battlefield command and control (Mohsin, 2024). It is believed that the system has not been given the authority to fire on self-designated targets (Greene, 2019). Furthermore, it is known that the US DoD is already testing LLMs for at least supporting purposes (Manson, 2023). It should be noted that the US is more transparent about its use of AI than China or Russia; it can be assumed that both rivals have similar projects and that the US also has secret projects. The U.S. Department of Defense (DOD) directive on autonomy in weapons systems does not directly prohibit autonomous weapons (Allen, 2022; U.S. Department of Defense, 2023).
Nevertheless, US officials frequently make clear commitments that humans will always have total control over nuclear weapons (Boulanin et al., 2019; Torode, 2024). The USA also endorsed the declaration of the REAIM summit (U.S. Department of State, 2024), which, while not legally binding, calls for humans alone to make decisions around the use of nuclear weapons. Already in 2022, the U.S. Department of Defense was tasked with a failsafe review to identify measures which could prevent the “unauthorised, inadvertent, or mistaken use of a nuclear weapon, including through false warning of an attack”. Furthermore, the United States Congress introduced legislation prohibiting the use of federal funds for AI-based nuclear weapon systems without meaningful human control, which has not yet passed (U.S. Congress, 2023).
Conclusion
It is almost certain that states will integrate AI-based systems into their militaries and national security decision-making processes. What remains unclear is where the line of autonomous decision-making will be drawn and how human-machine teaming will be designed (Jensen et al., 2024). While AI has the potential to enhance military capabilities, its application in areas such as improving the localisation of nuclear forces, speeding up decision-making processes, and guiding weapon delivery vehicles introduces significant risks. In particular, the role of AI-based systems in locating mobile nuclear platforms could undermine MAD by enabling more precise counterforce targeting. The mere perception that one state can neutralise another's second-strike capabilities may prompt escalatory behaviour.

Similarly, the use of AI in decision-making systems within nuclear command, control, and communications (NC3) raises concerns about automation bias and the justifiability of decisions, especially in situations where human operators might rely heavily on AI-generated solutions due to time pressure. History shows that human intuition, as in the case of Stanislav Petrov, has been vital in averting nuclear catastrophes. Replacing or minimising human involvement in these high-stakes decisions risks reducing the capacity for critical judgement, moral reasoning, and human diplomacy, essential elements that AI cannot replicate. Furthermore, AI-driven systems have demonstrated unpredictable and often escalatory behaviours in wargames, highlighting the unreliability of current AI technologies.

Moreover, the inclusion of AI in nuclear delivery vehicles poses additional risks. While these systems promise increased capabilities, they are also susceptible to technical failures and cyberattacks. Autonomous systems could diminish control over nuclear arsenals, complicating crisis management and increasing the chance of unintended conflict. Despite the potential benefits of AI, such as enhanced early-warning systems and improved decision-making speed, the overarching risks of manipulation, technical failure, and unpredictability are concerning.
The integration of AI into nuclear weapon systems presents a complex challenge to global strategic stability, and it is still unclear how significant the effect of AI on nuclear stability will be. Some argue that strategic stability can largely be maintained for now, because AI applications to date are scarcely used and the nuclear powers are oriented more defensively than offensively. Proponents of this view highlight that the nuclear weapon states focus on “securing the survivability of their nuclear retaliatory capabilities through hardening, concealment, and redundancy to minimise vulnerability to first strikes” (Su & Yuan, 2023, p. 34). A series of simulations by the Center for Strategic and International Studies likewise showed that AI capabilities had no effect on the general strategy of combining multiple instruments of power (Jensen et al., 2024). The argument has therefore been made that new technologies such as AI will change the character, but not the nature, of strategy (Jensen et al., 2024). Similarly, it has been argued that AI will not overthrow the foundations of nuclear strategy but will still have a significant impact on the balance of power (Boulanin et al., 2019). Escalation management in particular is an area that could be dramatically affected by AI.
In conclusion, while AI may offer some advantages, its application introduces risks that threaten to destabilise the fragile balance of nuclear deterrence. The unpredictability, lack of transparency, and susceptibility to failure of AI systems present profound challenges that must be carefully managed to prevent catastrophic outcomes. However, the introduction of AI will not fundamentally change the logic of nuclear deterrence or how states conduct their relations. This should be a modest reason for optimism, but also a call to create a robust regulatory regime on AI in nuclear weapon systems to prevent potentially disastrous developments.
Bibliography
Allen, G. C. (2022, June 6). DOD Is Updating Its Decade-Old Autonomous Weapons Policy, but Confusion Remains Widespread. Center for Strategic and International Studies. https://www.csis.org/analysis/dod-updating-its-decade-old-autonomous-weapons-policy-confusion-remains-widespread
Arul, A. (2022, February 21). How China is using AI for warfare. Center for Security and Emerging Technology. https://cset.georgetown.edu/article/how-china-is-using-ai-for-warfare/
BBC. (2017). AI image recognition fooled by single pixel change. BBC. https://www.bbc.com/news/technology-41845878
Beavers, R. L. (1974). Counterforce or Countervalue. U.S. Naval Institute. https://www.usni.org/magazines/proceedings/1974/april/counterforce-or-countervalue
Boulanin, V., Stoutland, P. O., & Topychkanov, P. (2019). The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk (V. Boulanin, Ed.). SIPRI. https://www.sipri.org/sites/default/files/2019-05/sipri1905-ai-strategic-stability-nuclear-risk.pdf
Carnegie Council. (2024). Nuclear deterrence. Carnegie Council for Ethics in International Affairs. https://www.carnegiecouncil.org/explore-engage/key-terms/nuclear-deterrence
Cheng, J., & Zeng, J. (2023). Shaping AI’s Future? China in Global AI Governance. Journal of Contemporary China, 32(143), 794–810. https://doi.org/10.1080/10670564.2022.2107391
Cook, B. (2021). The Future of Artificial Intelligence in ISR Operations. Air & Space Power Journal, 9(2), 41–55.
Cummings, M. (2004, September 20). Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 1st Intelligent Systems Technical Conference. AIAA 1st Intelligent Systems Technical Conference, Chicago, Illinois. https://doi.org/10.2514/6.2004-6313
Daniels, O. (2021). Speeding Up the OODA Loop with AI [Joint Air Power Competence Centre]. Joint Air Power Competence Centre. https://www.japcc.org/essays/speeding-up-the-ooda-loop-with-ai/
Encyclopaedia Britannica. (2024a). Countervalue targeting. https://www.britannica.com/topic/countervalue-targeting
Encyclopaedia Britannica. (2024b). Mutual assured destruction. Encyclopaedia Britannica. https://www.britannica.com/topic/mutual-assured-destruction
European Union. (2024). Regulation (EU) 2024/1689 (AI Act). Official Journal of the European Union. https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf
Felstead, P. (2024). SPEAR missiles will be first to be AI-enabled with Orchestrike capability, says MBDA. European Security & Defence. https://euro-sd.com/2024/07/major-news/39346/spear-missiles-and-orchestrike/
Geist, E., & Lohn, A. (2018). How Might Artificial Intelligence Affect the Risk of Nuclear War? RAND Corporation. https://doi.org/10.7249/PE296
Global Times. (2024, September 12). Chinese delegation elaborates on China’s principles of AI governance at summit. Global Times. https://www.globaltimes.cn/page/202409/1319689.shtml
Greene, T. (2019, December 11). Report: Palantir took over Project Maven, the military AI program too unethical for Google. The Next Web. https://thenextweb.com/news/report-palantir-took-over-project-maven-the-military-ai-program-too-unethical-for-google
Greenhill, K. M., & Krause, P. J. P. (2018). Coercion: The power to hurt in international politics. Oxford University Press.
Hoffman, W., & Kim, H. M. (2023). Reducing the Risks of Artificial Intelligence for Military Decision Advantage. Center for Security and Emerging Technology. https://doi.org/10.51593/2021CA008
Horowitz, M. C., Scharre, P., & Velez-Green, A. (2019). A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence (Version 2). arXiv. https://arxiv.org/abs/1912.05291
Hruby, J., & Miller, N. (2021). Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems. Nuclear Threat Initiative. https://www.nti.org/analysis/articles/assessing-and-managing-the-benefits-and-risks-of-artificial-intelligence-in-nuclear-weapon-systems/
Jensen, B., Atalan, Y., & Macias, J. M. (2024). Algorithmic Stability—How AI Could Shape the Future of Deterrence. On Future War.
Jochheim, U. (2021). China’s ambitions in Artificial Intelligence. European Parliament Think Tank. https://www.europarl.europa.eu/RegData/etudes/ATAG/2021/696206/EPRS_ATA(2021)696206_EN.pdf
Johnson, J. (2023). Nuclear Brinkmanship in AI-Enabled Warfare: A Dangerous Algorithmic Game of Chicken. War on the Rocks. https://warontherocks.com/2023/09/nuclear-brinkmanship-in-ai-enabled-warfare-a-dangerous-algorithmic-game-of-chicken/
Johnson, J., & Krabill, E. (2020, January 31). AI, Cyberspace, and Nuclear Weapons. War on the Rocks.
Kania, E., Saalman, L., & Bendett, S. (2018). AI, China, Russia, and the Global Order: Technological, Political, Global, and Creative Perspectives (N. D. Wright, Ed.). US Department of Defense. https://nsiteam.com/social/wp-content/uploads/2019/03/AI-China-Russia-Global-WP_FINAL2_fromMariah8mar2019_ndw11mar2019.pdf
Kaur, S. (2023, June 14). One nuclear-armed Poseidon torpedo could decimate a coastal city. Russia wants 30 of them. Bulletin of the Atomic Scientists. https://thebulletin.org/2023/06/one-nuclear-armed-poseidon-torpedo-could-decimate-a-coastal-city-russia-wants-30-of-them/
Lee, J. (2024, September 10). Sixty countries endorse ‘blueprint’ for AI use in military; China opts out. Reuters. https://www.reuters.com/technology/artificial-intelligence/south-korea-summit-announces-blueprint-using-ai-military-2024-09-10/
Lieber, K. A., & Press, D. G. (2017). The New Era of Counterforce: Technological Change and the Future of Nuclear Deterrence. International Security, 41(4), 9–49. https://doi.org/10.1162/ISEC_a_00273
Manson, K. (2023). The US Military Is Taking Generative AI Out for a Spin. Bloomberg. https://www.bloomberg.com/news/newsletters/2023-07-05/the-us-military-is-taking-generative-ai-out-for-a-spin?embedded-checkout=true
McDonnell, T., Chesnut, M., Ditter, T., Fink, A., & Lewis, L. (2023). Artificial Intelligence in Nuclear Operations. Center for Naval Analyses. https://www.cna.org/reports/2023/04/Artificial-Intelligence-in-Nuclear-Operations.pdf
Mohsin, S. (2024, February 29). Inside Project Maven, the US Military’s AI Project. Bloomberg. https://www.bloomberg.com/news/newsletters/2024-02-29/inside-project-maven-the-us-military-s-ai-project
Nadibaidze, A. (2022). Great power identity in Russia’s position on autonomous weapons systems. Contemporary Security Policy, 43(3), 407–435. https://doi.org/10.1080/13523260.2022.2075665
NSCAI. (2021). Final Report National Security Commission on Artificial Intelligence. National Security Commission on Artificial Intelligence. https://www.dwt.com/-/media/files/blogs/artificial-intelligence-law-advisor/2021/03/nscai-final-report--2021.pdf
OpenAI. (2024). OpenAI Charter. OpenAI. https://openai.com/charter/
Oxford Dictionaries. (2002). The Oxford essential dictionary of the U.S. military. Oxford University Press.
Panel of Experts on Libya. (2021). Letter dated 8 March 2021 from the Panel of Experts on Libya established pursuant to resolution 1973 (2011) addressed to the President of the Security Council. UN Security Council. https://documents.un.org/doc/undoc/gen/n21/037/72/pdf/n2103772.pdf?OpenElement
Parrington, A. J. (1997). Mutually Assured Destruction Revisited. Strategic Doctrine in Question. Airpower Journal, 11(4). https://apps.dtic.mil/sti/pdfs/ADA529841.pdf
Rivera, J.-P., Mukobi, G., Reuel, A., Lamparth, M., Smith, C., & Schneider, J. (2024). Escalation Risks from Language Models in Military and Diplomatic Decision-Making. The 2024 ACM Conference on Fairness, Accountability, and Transparency, 836–898. https://doi.org/10.1145/3630106.3658942
Saltini, A. (2023). AI and nuclear command, control and communications: P5 perspectives. European Leadership Network. https://www.europeanleadershipnetwork.org/wp-content/uploads/2023/11/AVC-Final-Report_online-version.pdf
Saltini, A. (2024, June 28). The implications of AI in nuclear decision-making. Federal Foreign Office: ‘Artificial Intelligence and Weapons of Mass Destruction’. https://rethinkingarmscontrol.org/papers/the-implications-of-ai-in-nuclear-decision-making/
Sankaran, J. (2019). A Different Use for Artificial Intelligence in Nuclear Weapons Command and Control. War on the Rocks. https://warontherocks.com/2019/04/a-different-use-for-artificial-intelligence-in-nuclear-weapons-command-and-control/
Shaw, D. B. (2023). Nuclear Deterrence: Unsafe at Machine Speed. Arms Control Association. https://www.armscontrol.org/act/2023-12/book-reviews/ai-and-bomb-nuclear-strategy-and-risk-digital-age
Shrivastava, A., Hullman, J., & Lamparth, M. (2024). Measuring Free-Form Decision-Making Inconsistency of Language Models in Military Crisis Simulations (arXiv:2410.13204). arXiv. http://arxiv.org/abs/2410.13204
Shuster, S. (2017). Stanislav Petrov, the Russian Officer Who Averted a Nuclear War, Feared History Repeating Itself. Time. https://time.com/4947879/stanislav-petrov-russia-nuclear-war-obituary/
Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006. https://doi.org/10.1006/ijhc.1999.0252
State Council. (2017). New Generation AI Development Plan. People’s Republic of China. https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm
Su, F., & Yuan, J. (2023). Chinese thinking on AI integration and interaction with nuclear command and control, force structure, and decision-making. European Leadership Network. https://europeanleadershipnetwork.org/wp-content/uploads/2023/11/Chinese-bibliography_AI_Nuclear_Final.pdf
Torode, G. (2024, May 2). US official urges China, Russia to declare only humans, not AI, control nuclear weapons. Reuters. https://www.reuters.com/world/us-official-urges-china-russia-declare-only-humans-not-ai-control-nuclear-2024-05-02/
Trenin, D. (2019). Strategic Stability in the Changing World. Carnegie Moscow Center. https://carnegieendowment.org/files/3-15_Trenin_StrategicStability.pdf
U.S. Air Force. (2020). Air Force Doctrine Publication 3-72 Nuclear Operations. Curtis E. Lemay Center. https://www.doctrine.af.mil/Portals/61/documents/AFDP_3-72/3-72-D30-NUKE-OPS-NC3.pdf
U.S. Congress. (2023, May 1). S.1394—Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023. Congress.Gov. https://www.congress.gov/bill/118th-congress/senate-bill/1394
U.S. Department of Defense. (2018). 2018 Nuclear Posture Review. U.S. Department of Defense. https://media.defense.gov/2018/Feb/02/2001872886/-1/-1/1/2018-NUCLEAR-POSTURE-REVIEW-FINAL-REPORT.PDF
U.S. Department of Defense. (2020). Nuclear Matters Handbook. https://www.acq.osd.mil/ncbdp/nm/NMHB2020rev/chapters/chapter2.html
U.S. Department of Defense. (2023, January 25). Directive 3000.09 Autonomy in Weapons Systems. https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf
U.S. Department of State. (2024, October 17). Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. U.S. Department of State. https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/
Williams, P. D. (Ed.). (2013). Security studies: An introduction (2nd ed). Routledge.
