
Main Question: How does the militarization of AI by the U.S. and China contribute to global instability?
Argument: The AI arms race between the U.S. and China, especially in autonomous weapon systems (AWS), increases the risk of conflict escalation due to unpredictability, biased data, and reduced human oversight.
Conclusion: International governance and responsible AI principles are essential to prevent global insecurity.
Algorithmic Deterrence: How Military AI Fuels Global Instability in the U.S.-China Arms Race
Introduction:
Artificial intelligence and its application in defense have been developing rapidly, changing the global security landscape and adding new layers of complexity that pose serious threats to international peace and stability. This essay argues that the integration of AI into autonomous weapon systems (AWS) and military strategies by both the United States and China creates unpredictable risks and increases the potential for conflict escalation. Unlike traditional military technologies, AI enables autonomy and can accelerate and enhance decision-making by processing large amounts of data and selecting useful information in ways humans cannot (Schraagen, 2024).
The militarization of AI promises great benefits, but the unintended consequences may be greater still. To counter the significant risks created by the proliferation of military AI, it is necessary to establish global governance and rules that nations must follow. Principles of responsible AI must be integrated to prevent the potentially disastrous consequences of military AI; protective frameworks and governance mechanisms are the best way to mitigate the risks posed by this technology (Garcia, 2023). The AI arms race is not just a technological competition; it is a destructive force that demands international governance to preserve global security and peace.
Following this brief introduction, the second section provides an overview of the AI arms race between the U.S. and China, focusing on their significant investments in military AI, particularly in AWS. The third section explores the unpredictability and risks associated with the deployment of AWS and the potential for conflict escalation that could affect global security. Finally, I discuss the need for global governance as a means of de-escalating the threat posed by AWS and the militarization of AI by both countries, and advocate for the implementation of responsible AI principles to reduce the risk of global instability.
The AI Arms Race Between the U.S. and China:
II.I. The United States:
The United States' military AI race began in 1983, when the American government invested one billion dollars in strategic computing (Haner & Garcia, 2019). Autonomy in particular has been a key component of U.S. national security strategy since the Department of Defense (DoD) issued Directive 3000.09 in 2012. This directive was the first to permit semi-autonomous systems to strike targets previously identified by human operators, and it allowed fully autonomous weapons to select and engage targets after senior-level DoD authorization (Haner & Garcia, 2019). Six years later, the U.S. National Defense Strategy placed particular emphasis on AI development in defense, and in 2020 the National Defense Authorization Act (NDAA) cited AI eleven times (Hunter et al., 2023). These milestones prompted the U.S. to intensify its efforts to advance artificial intelligence in defense. According to the Congressional Research Service, all branches of the U.S. military are actively working to integrate AI into semi-autonomous and autonomous systems, including fighter jets, drones, land-based vehicles, and naval ships.

The creation of the Joint Artificial Intelligence Center (JAIC) in 2018 was another milestone, intended to speed the deployment of AI technologies and extend their influence across the DoD for "joint force advantage." The JAIC took the lead in implementing AI across all branches of the military; as a result, the Army allocated $6.5 million for AI-related training, including simulations and virtual reality, according to the DoD's 2019 budget report (Hunter et al., 2023). In 2021, $1.5 billion of the DoD budget was directed toward microelectronics and 5G infrastructure, key components for the advancement of AI technologies. Another $1.7 billion went to autonomous systems, both to improve the mobility and speed of semi-autonomous and fully autonomous vehicles and to enhance human-machine collaboration. A further $800 million was dedicated to AI initiatives such as AI Pathfinders, the JAIC, and Project Maven (Hunter et al., 2023).
Examples of autonomous weapon systems utilized by the U.S. can be found in the Air Force, which allocated $87 million in 2019 for AI-based wargaming and training. AI-powered autonomous piloting systems are being developed with the goal of independent aircraft operation, even assuming full control of missions. μZero, for example, was a program designed to fly a U-2 reconnaissance aircraft, and it successfully did so in 2019. Similarly, the X-47B program, an autonomous piloting system, was meant to perform autonomous landings as well as aerial refueling, though it was later discontinued due to performance and funding issues. Despite these shortcomings, the American Air Force and Navy continue to collaborate on AI-enabled weapons such as the Long-Range Anti-Ship Missile (LRASM), which incorporates autonomous targeting capabilities (Hunter et al., 2023). The Navy and Marine Corps have also invested heavily in integrating AI for both training and combat purposes. Through the Navy's rapid prototype development program, a $49 million investment was made to add AI to combat systems, resulting in new submarine combat assets. The U.S. Navy has also researched AI-guided missile systems, which produced the Tomahawk Anti-Ship Missile (TASM) and the LRASM stealth anti-ship cruise missile. There is also the Cognitive Lethal Autonomous Weapons System (CLAWS), a prototype submarine under development whose onboard AI is authorized to detect, assess, and respond to threats autonomously using 12 torpedo tubes; it is expected to have a significant impact on naval warfare (Hunter et al., 2023).
II.II. China:
China, on the other hand, has emerged as a formidable competitor to the United States in the militarization of AI and the development of AWS. The Chinese State Council has stated its goal of becoming the world leader in artificial intelligence by 2030, with a particular focus on AI integration into its military, and competition with the United States appears to be a key factor shaping its AI strategy (Hunter et al., 2023). China's Next Generation Artificial Intelligence Development Plan has also publicly declared the intention to utilize AWS on the battlefield. China maintains a robust weapons-development budget, estimated at $250 billion annually, and is heavily investing in civil-military fusion; the State Council expects the country's AI industry to be worth $59 billion by 2025 and to reach $150 billion by 2030. Furthermore, between 2013 and 2018, China accounted for an estimated 60% of global funding for AI projects, double what the U.S. invested in AI projects during the same period (Haner & Garcia, 2019). So far, China is known to have invested in and tested several unmanned vehicles and to be working toward missile technology integrated with AI. The Chinese government is advancing AI integration across diverse military platforms, including the development of autonomous robots to increase weapon effectiveness and of tactics to create new capabilities, as well as "autonomous at rest" features for data processing and decision support (Hunter et al., 2023). Like the United States, China appears to have avoided openly deploying weapon systems that can operate fully without human involvement, in particular systems able to identify, target, and even fire of their own accord. However, just as in the U.S., many of China's existing systems could be quickly adapted to function autonomously with only a few software adjustments (Hunter et al., 2023).
China, much like the U.S., has developed its own set of autonomous vehicles for air, land, and sea. The Sharp Claw I/II, for example, are among the limited AI-powered technologies China has created to strengthen its ground troops: compact, tank-styled vehicles, similar to drones, that require very little human intervention and are intended to serve as armored support units for ground operations. In the air domain, China has heavily invested in the development of unmanned aerial vehicles (UAVs), stealth aircraft, and cruise missiles with automatic launching and targeting capabilities (Hunter et al., 2023). There has even been a public demonstration of the DF-ZF stealth craft, which can glide at hypersonic speeds and features automated missile systems. This craft was built specifically to bypass American defense systems, once again reflecting China's ambition to compete with the U.S. militarily and technologically. More alarmingly, China is already exporting military drones to strategic allies in the Middle East. One example is Ziyan's Blowfish A3, which has been advertised as able to carry out complex combat tasks autonomously, such as precise timing detection, range-limited reconnaissance, and targeted strikes (Hunter et al., 2023).
Types of AI Military Systems

| U.S. | China |
| --- | --- |
| Autonomous piloting systems (μZero, X-47B) | Sharp Claw I/II autonomous ground vehicles |
| AI-enabled missiles (LRASM, Tomahawk) | UAVs and stealth aircraft with AI |
| Cognitive Lethal Autonomous Weapons System (CLAWS) submarine | DF-ZF hypersonic stealth craft with automated missiles |
| AI training and simulations | Exported autonomous drones (Blowfish A3) |
AWS’ Unpredictability and Risks to Global Security:
The debate surrounding autonomous weapon systems is ultimately about the extent to which AI takes over, and about the fear that humans will no longer control the AI's actions. Imperfections in these systems can arise from biased data and flawed modeling assumptions (Schraagen, 2024). The results can be devastating, particularly when the human element is removed. The more autonomous a system becomes, the less human involvement is required; this often limits human input to the early stages of a mission, creating a disconnect from the mission's outcome. The consequences may well violate international legal standards and even the ethical and moral principles we live by (Garcia, 2023).
Trust is one of the first challenges surrounding AWS and military applications of AI in general. AI systems need to be trustworthy for people to utilize them, and trust stems from appropriate development, testing, and validation methods at each stage of production. AI also needs to be explainable to be trusted by users. Explainability refers to "the capability of an AI agent to produce details or reasons to make its functioning clear or easy to understand" (Schraagen, 2024). However, many of the AI systems available today, such as deep neural networks, are difficult to explain or comprehend, making it hard for regulators and developers to understand and check the results the system produces.
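To make explainability concrete, here is a minimal, purely illustrative Python sketch of one widely used post-hoc technique, permutation feature importance, applied to an opaque classifier. Everything in it is an assumption for illustration: the data are synthetic (generated with scikit-learn's make_classification), and the model and parameters are arbitrary choices rather than anything drawn from the military systems discussed in this essay.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 6 features, of which only 3 carry signal.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque model standing in for a hard-to-inspect "black box".
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much held-out accuracy drops. Large drops mark the features the model
# actually relies on -- a partial, post-hoc form of explanation.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Even a diagnostic like this reveals only which inputs a model relies on, not why it maps them to a given output; for deep neural networks in high-stakes settings, such partial explanations are often the best that regulators and developers can obtain.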
There is also the issue of biased data: because AI relies on large amounts of data to work, it can produce unfair results. For example, a system might work well for a group of white men but poorly for a group of black women. The deeper issue is that the world changes constantly and data can quickly become outdated, making AI predictions less reliable. This phenomenon is known as distributional drift, and it is exacerbated when AI is militarized: the situations are more complex and only limited data are available, which compounds the problem of bias and discrimination. This makes it all the more important to ensure the quality of data in the military and to strive for unbiased data. Doing so is not only crucial from an ethical point of view but also affects performance, as a biased AI system will be ineffective in particular situations (Schraagen, 2024). The consequences of AI ethics failures vary with context: in commercial facial recognition, a failure might lead to racial bias, whereas in a military computer vision system it could result in a misidentified target being attacked by a military force (Chapa, 2024).
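As a minimal sketch of distributional drift, the toy example below trains a classifier on one distribution and then scores it after the distribution moves. All of it is hypothetical: the data are randomly generated, and the shift parameter simply stands in for an operating environment that no longer matches training conditions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Two Gaussian classes; `shift` moves both class means, standing in
    # for an environment that has drifted since training.
    x0 = rng.normal(loc=[0.0 + shift, 0.0 + shift], scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=[2.0 + shift, 2.0 + shift], scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train under one set of conditions...
X_train, y_train = sample(1000)
model = LogisticRegression().fit(X_train, y_train)

# ...then evaluate on matching data and on drifted data.
X_same, y_same = sample(1000)
X_drift, y_drift = sample(1000, shift=2.0)
print("accuracy without drift:", model.score(X_same, y_same))
print("accuracy after drift:  ", model.score(X_drift, y_drift))
```

In runs of this sketch, accuracy drops sharply on the drifted data even though the model itself is unchanged, which is precisely the failure mode described above: predictions grounded in yesterday's data become unreliable in today's environment.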
Returning to the topic of human control, there is an ongoing debate among scholars, organizations, and governments about autonomous weapon systems. The U.S. Department of Defense, for example, prefers the term "appropriate levels of human judgment" to "meaningful human control," arguing that "control" is too complex to define clearly and is not always required, unlike compliance with the laws of war, which is mandatory. In response, Human Rights Watch argues that "control" is the better term because it encompasses both mental judgment and physical action, which are necessary to prevent AWS from causing irreparable harm. Control over AWS decisions is usually shared among multiple people and exercised at different stages of an operation; depending on the type of AWS, humans may only be able to intervene after the system is activated to stop it before it causes damage, though this is not always the case (Schraagen, 2024).
Scholars have argued that competition between great powers is a 'folly that must end,' as it distracts from and diverts resources away from the truly important global issues confronting society. Genuine security cannot be achieved by any one state alone; common good governance is the way forward. Pushing toward a global common good is a way to combat the proliferation of military AI, which is making the world less safe. In fact, more militarization does not enhance protection but endangers future generations. We have seen this before with nuclear weapons, and the cycle is repeating with AI-enabled weapons. The growing use of AI for military purposes increases the risk to global stability because the machine learning underlying AI systems can behave unpredictably, as previously discussed. This raises the question of how global peace can be maintained when AI is able to make choices between life and death that could result in wars (Garcia, 2023).
While there is ongoing debate about the nature and extent of AI competition in the military, I argue that an AI arms race is already underway, with countries like the United States and China actively striving for dominance. The international balance of power (BOP) is significant when discussing this technological arms race and what it means for global stability. BOP refers to "how the distribution of economic and military power between states affects international relations": when states gain power relative to others, they also enhance their ability to exert political, economic, and military influence. Consequently, fluctuations in relative power between the U.S. and China shape their capacity to assert influence internationally (Hunter et al., 2023). In the military context, many researchers emphasize that AI can significantly affect not only a state's military strength but also its ability to project power at regional and global levels. In other words, AI will not only transform military strategies but also alter global power dynamics, and it could even escalate rivalries among major powers such as the U.S. and China. The aim is not for an AI system to be free of flaws, or even to replace humans, but merely to be powerful enough to give a state a strategic edge over its adversaries (Hunter et al., 2023).
One notable sign of a world order already being disrupted by the introduction of AI into the military is the decline in global peace. According to the Global Peace Index, peace levels have fallen by 2.5% since 2008 while military spending has soared; spending on military AI alone is projected to reach $11.6 billion in 2025 (Garcia, 2023). The weaponization of AI looms as a threat to world peace and to existing legal frameworks, and uncertainty grows over whether international law can keep pace with AI's rapid development. Effective regulation is needed to ensure peace and the continuation of a stable global order.
The Need for Global Governance:
Due to the rapid development of AI, many agencies and organizations have been working to create AI ethical principles and guidelines. One analysis of 84 documents on AI principles found that transparency, justice and fairness, non-maleficence, responsibility, and privacy were the most frequently mentioned. While some of these documents have been criticized as too abstract to be practical, they nevertheless provide a good starting point (Schraagen, 2024). Regarding AWS in particular, efforts to establish global regulatory frameworks have faced considerable obstacles, though some scholars argue that creating such frameworks is feasible if key actors were to initiate legal processes outside the UN Convention on Certain Conventional Weapons (CCW) (Bode et al., 2023).
In the case of China, its delegation has voiced concerns at the Group of Governmental Experts (GGE) meetings under the CCW, highlighting the risks posed by AWS and advocating for an agreement prohibiting systems that can select targets without the possibility of human intervention. However, its position is backed not only by ethical concerns but also by geopolitical ones: the Chinese government has expressed worry that technologically advanced nations might exploit AWS to dominate others or provoke conflict. Despite this initial position against AWS, China has not implemented any domestic legislation to prohibit them, and there is little transparency about whether its military has considered such measures. In this respect, China, much like the U.S., continues to develop unmanned systems for land, sea, and air operations and continues to push toward automation, as previously discussed. Clearly, despite calls for regulation, ongoing military investments in AI technology make it difficult to enforce a ban on AWS (Bode et al., 2023).
The United States has consistently argued against banning autonomous weapon systems, emphasizing instead their potential to enhance compliance with international humanitarian law. It argues that AWS can reduce civilian casualties through greater precision and can improve commanders' oversight of the battlefield. The U.S. appears to support, rather than stigmatize, innovation that aligns with the goals of the CCW and has outlined guidelines to ensure AWS function according to their operators' intent. The U.S. also openly opposes any legally binding regulation of AWS, unlike China, which has at least been open to regulation (Bode et al., 2023). There is no doubt that both countries are investing in weaponizing AI and perceive it as a strategic asset in the context of their competition. While a global framework seems unlikely at this time, it remains the best course of action for maintaining global stability and regulating military AI.
Conclusion:
The integration of artificial intelligence into military strategies by the United States and China is intensifying global instability. As both nations continue to develop and deploy autonomous weapon systems, the risks of miscalculation, biased decision-making, and unintended conflict escalation grow. These systems reduce human oversight and rely on data-driven models that can be flawed, outdated, or biased, raising serious ethical and legal concerns. The unpredictability of AI behavior makes it difficult to trust militarized AI systems and even more difficult to regulate these technologies. This arms race is not only about technological superiority; it also poses a profound threat to international peace and security. To reduce the risks, international governance frameworks are urgently needed, and the promotion of responsible AI principles is the first step toward managing the military use of AI.
Bode, I., Huelss, H., Nadibaidze, A., et al. (2023). Prospects for the global governance of autonomous weapons: Comparing Chinese, Russian, and US practices. Ethics and Information Technology, 25(1), 5. https://doi.org/10.1007/s10676-023-09678-x
Chapa, J. (2024). Military AI ethics. Journal of Military Ethics, 23(3–4), 306–321. https://doi.org/10.1080/15027570.2024.2439654
Garcia, D. (2023). The AI military race: Common good governance in the age of artificial intelligence. Oxford University Press. https://research.ebsco.com/linkprocessor/plink?id=a0a04cbe-58f1-347a-a0b5-e584fed215ed
Haner, J., & Garcia, D. (2019). The artificial intelligence arms race: Trends and world leaders in autonomous weapons development. Global Policy, 10(3), 331–337. https://doi.org/10.1111/1758-5899.12713
Hunter, L. Y., Albert, C. D., Hennigan, C., & Rutland, J. (2023). The military application of artificial intelligence technology in the United States, China, and Russia and the implications for global security. Defense & Security Analysis, 39(2), 207–232. https://doi.org/10.1080/14751798.2023.2210367
Schraagen, J. M. (2024). Responsible use of AI in military systems. Taylor & Francis. https://research.ebsco.com/linkprocessor/plink?id=85cdd54b-e3b4-3313-917b-23b25da9933d
