
Global Competition over AI governance models


How do the US, EU, and China’s differing AI governance models shape global competition and the emerging international AI order?

Divergent models – US deregulation, EU rights-based regulation, and China’s state-centric approach – create geopolitical competition but also drive a networked governance system.

AI governance is evolving into a cooperative yet fragmented global architecture, where interoperability and coordination are crucial to avoid instability and ensure effective governance.


Global Competition over AI governance models

By Giulia Convertini, Fellow

Introduction: AI Governance as a Contest Over Global Order


Artificial intelligence (AI), as a general-purpose technology, is reshaping economic power, security, and state capacity, and with them the terms of global competition. AI has become a geopolitical game-changer: the United States and China are investing billions in AI innovation, from start-ups and research labs to advanced semiconductors and critical digital infrastructure, racing ahead in both civilian and defense applications. Europe, by contrast, has focused on the ethical dimension, prioritizing human-centric and trustworthy AI.

The United States, the European Union, and China have each developed distinct AI governance models that reflect their political systems, strategic priorities, and normative visions for the international order. These contrasting approaches, namely America’s innovation‑driven deregulation, Europe’s rights‑based regulatory framework, and China’s state‑centric, security‑oriented model, highlight a widening transatlantic divide and a deeper global contest over who shapes the rules of the digital age. This article examines these competing models and situates them within a broader shift toward a more pluralistic and networked global AI governance architecture.


The Transatlantic Divide on AI Governance: Between the EU AI Act and the So-Called “Brussels Effect”


With the AI Act, the European Union introduced the world’s first comprehensive legal framework on AI, establishing a risk-based, rights-protective regime with the ambition of positioning the EU as a global standard-setter in the emerging competition over AI governance.


By prioritizing transparency, safety, and fundamental rights while prohibiting “unacceptable” AI uses such as social scoring, the EU leverages its regulatory power to project normative influence internationally, a dynamic described as the “Brussels effect” (Bradford, 2020).


This strategy reflects the EU’s long‑standing model of embedding ethical safeguards within market‑harmonisation objectives and a broader digital‑policy architecture spanning the General Data Protection Regulation (GDPR), the Data Act, the Digital Services Act (DSA), and the Digital Markets Act (DMA). In the global arena, this regulatory assertiveness functions not only as a governance blueprint but also as a geopolitical tool: it compels companies and third countries to align with EU standards to maintain market access, influencing legislative trajectories across the world.


Through this combination of ethical ambition and market leverage, the EU positions itself as a central actor in shaping the rules of global AI competition, even as debates continue over the balance between regulatory rigor and innovation capacity, as exemplified by the ongoing debate on the Digital Omnibus.


The EU Digital Omnibus and the Impact of Deregulation on the EU’s Global Role in AI Governance


The EU’s regulatory assertiveness forms part of a broader geopolitical strategy: projecting normative leadership globally through the “Brussels effect.” Csernatoni (2025) underscores that the EU’s regulatory approach is meant to set international benchmarks and influence global standards, leveraging the size of the EU single market to shape AI governance worldwide. This strategy hinges on values-based diplomacy and the diffusion of ethical norms.


In proposing the EU Digital Omnibus, the European Union is attempting to shift from regulation to innovation, aiming to simplify the EU Digital Rulebook to boost European competitiveness. According to Csernatoni, this challenges the EU’s global reputation as a regulatory superpower capable of setting the agenda for promoting trustworthy, human-centric AI systems. In her own words:


The EU has framed its shift toward deregulation as necessary and unavoidable because of heightened geopolitical risks, particularly Europe’s vulnerabilities in digital supply chains and AI infrastructure. Europe’s reliance on external providers for essential AI components, such as advanced semiconductors and cloud computing resources, exposes the bloc to strategic dependencies and potential exploitation by rival powers, notably China and the United States. Moreover, the EU’s pivot toward innovation demands careful consideration of AI systems’ dual-use nature for both civilian and military applications, which could inadvertently exacerbate geopolitical tensions or lead to arms races in autonomous weapons systems.



The Trump administration’s approach to AI, articulated in the July 2025 America’s AI Action Plan, reflects a distinctly market‑driven, deregulatory philosophy aimed at securing U.S. dominance in the global AI race. The plan is built around three core pillars – accelerating innovation, building domestic AI infrastructure, and asserting leadership in international AI diplomacy and security – and advances more than 90 federal actions designed to remove regulatory barriers, expand private‑sector‑led AI development, and streamline permitting for data centers and semiconductor facilities. It emphasizes open‑source and open‑weight models, protection of free speech and “American values” in frontier AI, and the promotion of U.S. AI exports as tools of geopolitical influence.


Accompanying executive orders reinforce this deregulatory thrust: they seek to prevent “woke AI” in federal procurement, accelerate infrastructure approvals, and promote the “American AI Technology Stack,” while preempting stricter state-level AI rules through a national framework that prioritizes minimal compliance burdens. Together, these actions frame AI as a national security imperative and reject precautionary governance models in favor of rapid innovation, global competitiveness, and ideological neutrality requirements for AI systems procured by the federal government.


The U.S. and the EU are charting increasingly divergent paths in the global AI landscape. The Trump administration’s strategy prioritizes speed, deregulation, and geopolitical supremacy, trusting industry to lead. By contrast, the EU maintains a rights-centric, precautionary regulatory model centered on the AI Act’s binding rules, risk classifications, and prohibitions on high-risk or unacceptable AI uses (Engler, 2023). However, the EU also recognizes that overly complex digital rules can hinder scale-up, innovation, and competitiveness, gaps the recently launched Digital Omnibus initiative is intended to address.


Overall, while the U.S. model prioritizes speed, flexibility, and global competitiveness, the EU model continues to prioritize rights protection and harmonization, albeit now with growing attention to regulatory efficiency through initiatives like the Digital Omnibus. This creates a persistent transatlantic tension: U.S. firms face minimal federal constraints but regulatory fragmentation at the state level, whereas EU firms face higher compliance burdens but benefit from legal certainty within a harmonized single market. The Digital Omnibus could narrow this gap, but only if simplification enhances efficiency without eroding the EU’s core rights-based safeguards.


Source: Author’s own compilation, based on the sources listed in the article.


China’s State-Centric Governance and Infrastructure-Led Influence

China's political, economic, and foreign policy landscape has undergone significant transformation in recent years. With the New Generation Artificial Intelligence Development Plan, launched in 2017, the Chinese government strategically positioned artificial intelligence as a core driver of economic transformation, with the ambitious goal of establishing China as a global hub for AI innovation by 2030 (Morgan Stanley, 2025). China’s AI governance model is characterized by strong state leadership, a security‑first orientation, and a vertically structured regulatory system that targets specific technologies rather than relying on a single comprehensive law.

Domestically, China was the first country to develop AI governance and implement binding regulations on selected applications, such as recommendation algorithms and content generation (Cheng & Zeng, 2023). It has also sought to participate in global AI governance efforts, although its role has so far been limited (Sheehan, 2024).

The framework is expanded through detailed rules on recommendation algorithms, deep synthesis technologies, and generative AI – each designed to safeguard national security, maintain social stability, and enforce ideological alignment.

Jinghan Zeng has argued that China’s bold AI practices should be understood as part of Beijing’s adaptation strategy to governance by digital means. From this perspective, AI forms a crucial component of a broader digital technology package that the Chinese authoritarian regime utilises not only to enhance public service delivery but also to consolidate its authoritarian control (Zeng, 2020).


China’s Global AI Governance Initiative (GAIGI)


The Global AI Governance Initiative (GAIGI), launched in 2023, clearly illustrates Beijing’s attempt to position China as a leader in AI governance. The initiative is directly linked to China’s broader foreign policy goals and its rhetoric on international affairs: China’s commitment to AI governance is about more than establishing a regulatory framework; it aligns with Beijing’s larger global ambitions.

Beijing has been open about its concerns regarding AI safety and has included the potential dangers of emerging technologies in China’s first-ever national security white paper. Yet, in my opinion, China’s vision may clash with Western norms and values, and its bid for global leadership can also be read as advancing an alternative to the existing rules-based international order. From a Western perspective, this is particularly challenging: Chinese leadership and its massive AI sector might distort norms against the “best interests of the common good,” especially when it comes to individual rights and liberties.



Source: Author’s own compilation, based on the sources listed in the article.


Toward a Pluralistic Architecture of AI Governance


AI governance is increasingly turning into an evolving system shaped not only by rules but also by coordinated practices, shared standards, and new institutional arrangements. Recent developments, such as the United Nations’ creation of the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI, mark some of the most significant steps toward inclusive global coordination, offering all 193 UN Member States equal participation and helping close gaps in responsible AI adoption.

These processes are intentionally inclusive and consensus-driven, reflecting a recognition that interoperable standards underpin global innovation and stability. At the same time, cooperation is taking new, distributed forms: AI governance is increasingly organised through networked mechanisms like the OECD and UNESCO ethics frameworks, African Union strategies, G7 and ASEAN soft‑law instruments, and the network of AI safety institutes emerging from the international AI Safety Summit process.

Instead of forcing regulatory uniformity, this ecosystem accommodates multiple approaches while enabling cooperation across diverse political and economic contexts. This is even more evident in research from Cambridge’s Bennett School of Public Policy (2025), which highlights how regional blocs such as APEC, ASEAN, the African Union, and the G20 are pursuing “pragmatic pluralism,” achieving equivalent AI safety and accountability outcomes through context‑specific methods – regulatory sandboxes in the Asia-Pacific, mandatory algorithmic safeguards in Southeast Asia, and representative dataset initiatives in Africa.

These efforts demonstrate that effective governance does not require identical rules; instead, legitimacy and functionality emerge from tailoring governance to local conditions.

Together, these examples reveal an emerging pluralistic architecture in which governance is built through distributed networks, regionally grounded experimentation, and inclusive global platforms – reshaping AI governance from a competitive regulatory race into a multilayered system of cooperative infrastructure.



Conclusion

Taken together, these developments suggest that AI governance is becoming a form of geopolitical infrastructure: a layered, dynamic, and distributed system through which states, regions, and international institutions negotiate power, set standards, and shape the future of technological order. The U.S. and China will undoubtedly continue to compete, but they will do so within an increasingly interconnected governance ecosystem shaped by cooperation and interoperability. As AI becomes ever more embedded in political, economic, and military systems, the durability of this pluralistic architecture – its ability to balance rights, innovation, and security across diverse contexts – will be central to determining whether the global AI order evolves toward fragmentation, hegemony, or a more inclusive and stable form of shared governance.


Transformative technologies require cooperative international governance to ensure safety, stability, and shared prosperity. Without coordinated global mechanisms, the world risks entrenching a fractured, unstable “AI order” that amplifies inequality, fuels strategic rivalry, and weakens trust in the systems increasingly mediating human life. It remains worthwhile to research and monitor policy developments regarding AI governance across international organisations and key national actors.


Sources:

International Telecommunication Union (ITU). (2025). The annual AI governance report 2025: Steering the future of AI. https://www.itu.int/epublications/publication/the-annual-ai-governance-report-2025-steering-the-future-of-ai

Atlcom Magazine. The transatlantic divide over AI is growing: Will China profit? https://www.atlcom.nl/magazine/the-transatlantic-divide-over-ai-is-growing-will-china-profit/

BBC Monitoring. (2025). Briefing: China’s ‘national security’ white paper flags international concerns. https://monitoring.bbc.co.uk/product/b0003vjd

Bennett School of Public Policy. (2025, December 15). Global majority is building international AI governance through cooperation, not competition. AIXGEO. https://www.bennettschool.cam.ac.uk/blog/global-majority-is-building-international-ai-governance-through-cooperation-not-competition/

Bradford, A. (2020). The Brussels effect: How the European Union rules the world. Oxford University Press.

Cheng, J., & Zeng, J. (2023). Shaping AI’s future? China in global AI governance. Journal of Contemporary China, 32(143), 794–810. https://doi.org/10.1080/10670564.2022.2107391

CIGI Online. (2025, July 22). China’s AI governance initiative and its geopolitical ambitions. https://www.cigionline.org/articles/chinas-ai-governance-initiative-and-its-geopolitical-ambitions/

Csernatoni, R. (2025, May 20). The EU’s AI power play: Between deregulation and innovation. Carnegie Europe. https://carnegieendowment.org/research/2025/05/the-eus-ai-power-play-between-deregulation-and-innovation

Engler, A. (2023). The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment. Brookings. https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/

HAI Stanford. (2025). The 2025 AI Index Report. https://hai.stanford.edu/ai-index/2025-ai-index-report

IAPP. (2025, October 8). Global AI governance law and policy: EU. https://iapp.org/resources/article/global-ai-governance-eu

Kerry, C. F., Meltzer, J. P., Renda, A., & Wyckoff, A. W. (2025, February 10). Network architecture for global AI policy. Brookings. https://www.brookings.edu/articles/network-architecture-for-global-ai-policy/

Lo, L. S. (2025). Artificial intelligence regulation matures: Landscapes of the USA, European Union, and China. IFLA Journal, 0(0). https://journals.sagepub.com/doi/10.1177/03400352251384915

Reynolds, T. D., & Doll, B. E., Jr. (2026, January 22). AI trends for 2026 – The federal government’s use and regulation of AI. MOFO Tech. https://mofotech.mofo.com/topics/ai-trends-for-2026---the-federal-government-s-use-and-regulation-of-ai

OECD. OECD.AI Policy Navigator: China (People’s Republic of). https://oecd.ai/en/dashboards/national/china-peoples-republic-of

Sheehan, M. (2024). Tracing the roots of China’s AI regulations. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2024/02/tracing-the-roots-of-chinas-ai-regulations?lang=en

Stanford HAI Agile Index Team. (2025). Insights into the ever-evolving landscape of AI governance: Agile Index Report 2025. https://hai.stanford.edu/ai-index/2025-ai-index-report

Stanford DigiChina. (2017). Full translation: China’s ‘New Generation Artificial Intelligence Development Plan’ (2017). https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/

The White House. (2025, July). America’s AI Action Plan. https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf

United Nations. Global Dialogue on AI Governance. https://www.un.org/global-dialogue-ai-governance/en

United Nations. Independent International Scientific Panel on AI. https://www.un.org/independent-international-scientific-panel-ai/en

World Economic Forum. (2025, October 3). The UN has moved to close the gap in AI governance. Here’s what to know. https://www.weforum.org/stories/2025/10/un-new-ai-governance-bodies/

Zeng, J. (2020). Artificial intelligence and China’s authoritarian governance. International Affairs, 96(6), 1441–1459. https://doi.org/10.1093/ia/iiaa172

