
Global AI Regulation: The EU AI Act, China's AI Rules, and the Emerging Patchwork of Governance

AI regulation is rapidly taking shape across the globe. The EU AI Act enters enforcement, China implements comprehensive AI governance, and the US debates federal legislation. We analyze the key regulatory frameworks, their differences, and the implications for global AI development.

James Wong · Dec 18, 2025 · 10 min read

TL;DR

The global AI regulatory landscape has crystallized around three distinct approaches: the EU's comprehensive, risk-based framework (AI Act); China's sector-specific, government-directed approach; and the US's largely industry self-regulatory model, though momentum is building for federal legislation. These divergent frameworks create compliance challenges for global AI companies while reflecting fundamentally different values about the role of technology in society.

What Happened

The EU AI Act, passed in March 2024, has entered its phased enforcement period. As of February 2026, the prohibition on "unacceptable risk" AI systems is in effect — banning social scoring, real-time remote biometric identification in public spaces (with limited exceptions), and AI that manipulates human behavior. The "high-risk" provisions, covering AI in healthcare, employment, education, and law enforcement, take full effect in August 2026.

China has taken a different path, implementing a series of targeted regulations rather than a single comprehensive law. The Interim Measures for the Management of Generative AI (effective August 2023) require AI services to align with "core socialist values" and undergo security assessments before public release. The Deep Synthesis Provisions mandate disclosure of AI-generated content. And the Algorithm Recommendation Regulations require transparency in algorithmic decision-making. China's approach gives the government significant oversight over AI development while still encouraging rapid innovation.

The United States remains the outlier among major AI-developing nations with no comprehensive federal AI legislation. However, the landscape is shifting. The Biden-era Executive Order on AI established safety testing requirements for frontier models. Multiple bipartisan bills are under consideration, including proposals for mandatory AI impact assessments, algorithmic accountability, and deepfake disclosure. Several states — California, Colorado, and Illinois — have passed their own AI laws, creating a patchwork of compliance requirements.

Why It Matters

For AI companies operating globally, the fragmented regulatory landscape creates significant compliance complexity. A model that is legal to deploy in the US might require modifications for the EU market and could be entirely prohibited in China. This is leading to "regulatory arbitrage," where companies choose development and deployment locations based on regulatory favorability, and "compliance overhead," where substantial engineering resources are devoted to meeting different jurisdictional requirements.

More fundamentally, these regulatory choices will shape the trajectory of AI development. The EU's emphasis on transparency and fundamental rights may slow certain applications but build public trust. China's government-directed approach enables rapid deployment in state-approved use cases but constrains independent research. The US's lighter touch fosters innovation but raises concerns about accountability and public safety.

Technical Details

Key regulatory requirements and their technical implications:

  • EU AI Act — High-Risk Requirements — Mandatory risk assessments, data governance documentation, human oversight mechanisms, accuracy/robustness/cybersecurity testing, and registration in the EU database. Technical implication: requires extensive model documentation, bias testing pipelines, and audit-ready logging systems.
  • EU AI Act — Foundation Model Requirements — Providers of general-purpose AI models (GPAI) must provide technical documentation, comply with copyright law, and publish training data summaries. Models posing "systemic risk" face additional obligations including adversarial testing and incident reporting.
  • China — Security Assessment — Generative AI services must pass security assessments evaluating content safety, data compliance, and algorithmic fairness before public deployment. Technical implication: requires comprehensive content filtering systems aligned with Chinese regulatory standards.
  • US — Executive Order Requirements — Companies training models using more than 10^26 floating-point operations (total training compute, not FLOPS throughput) must report training details and safety test results to the Department of Commerce. Technical implication: requires compute measurement and documentation systems for frontier training runs.
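To make the compute-reporting threshold concrete, a minimal sketch of how a lab might estimate whether a training run crosses the 10^26 mark, using the common 6·N·D rule of thumb (roughly 6 FLOPs per parameter per training token). The parameter and token counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
def training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute via the common 6*N*D
    approximation: ~6 FLOPs per parameter per training token."""
    return 6.0 * params * tokens

EO_THRESHOLD = 1e26  # reporting threshold cited in the US Executive Order

# Hypothetical frontier run: 1.8 trillion parameters, 15 trillion tokens.
flops = training_flops(params=1.8e12, tokens=15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")
print(f"Reporting required: {flops > EO_THRESHOLD}")
# A run of this scale lands at ~1.6e26 FLOPs, above the threshold.
```

The 6·N·D estimate ignores architecture-specific details (attention overhead, mixture-of-experts routing, multiple epochs), so a compliance system would track measured accelerator-hours as well, but the rule of thumb shows how close frontier-scale runs already sit to the reporting line.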

What's Next

The trend is clearly toward more regulation globally. The G7 Hiroshima AI Process is working toward international AI governance norms. The UN has established an AI Advisory Body developing global governance recommendations. India, Brazil, and Japan are all drafting AI legislation. The key question is whether these efforts will converge toward interoperable standards or diverge further into incompatible regulatory islands. The AI industry is increasingly advocating for harmonized international standards to reduce compliance fragmentation.

