Chapter 14 — Governance and the EU AI Act
This chapter develops the governance and regulatory framework for AI deployment that has emerged through 2018–2026 and is continuing to evolve through 2027–2030. The framework is multi-layered, jurisdictionally diverse, and rapidly maturing. Substantial deployment decisions in 2026 are shaped as much by regulatory considerations as by technical or economic ones; firms that integrate governance thinking into deployment design from the start have advantages over firms that retrofit compliance after deployment.
The chapter’s central organising claim is that AI governance has converged on a risk-based approach — different risk levels triggering different regulatory requirements — but with substantial implementation variation across jurisdictions. The European Union’s AI Act, finalised in 2024 and entering into force progressively through 2025–2027, is the most-comprehensive risk-based framework globally and substantially shapes the broader landscape through Brussels-effect dynamics. The United States has a more-fragmented approach combining federal executive action, state-level legislation, and sector-specific regulation. Other jurisdictions (UK, China, Australia, Malaysia, Singapore, Japan, Korea, Brazil) have developed their own frameworks that vary in scope, mechanism, and stringency. Sector-specific governance overlays the horizontal frameworks in high-stakes domains (medicine, finance, autonomous vehicles, professional services).
The Part II cases of preceding chapters provide direct case material for understanding how governance frameworks emerge and how deployment failures shape regulatory response. The Robodebt case (Section 12.1) substantially shaped Australian government-AI policy. Cambridge Analytica (Section 10.4) drove GDPR enforcement intensification. Boeing 737 MAX MCAS (Section 9.7) drove FAA Organization Designation Authorization reform. Watson Health (Section 7.3) influenced FDA AI/ML SaMD framework development. Klarna (Section 8.4) is influencing consumer-protection thinking on AI customer service. The cautionary cases motivate regulatory development; understanding them is foundational to understanding the governance landscape they have produced.
The chapter proceeds in fourteen sections. Section 14.1 sets the overview. Section 14.2 covers the historical arc from Asilomar to Bletchley. Sections 14.3 and 14.4 cover the EU AI Act in detail. Section 14.5 covers the US landscape. Section 14.6 covers the UK approach. Section 14.7 covers China. Section 14.8 covers other jurisdictions including Australia, Malaysia, Singapore, Japan, Korea, and Brazil. Section 14.9 covers the privacy and data-governance overlay. Section 14.10 covers frontier-model governance. Section 14.11 covers sector-specific governance. Section 14.12 covers algorithmic accountability and impact assessment. Section 14.13 covers the compliance landscape. Section 14.14 sketches the 2026–2030 trajectory.
14.1 The AI governance landscape — overview and structure
AI governance has emerged as a distinct policy domain over a relatively short period. Before 2018, AI-specific regulation was minimal globally; the broader frameworks (privacy regulation, sector-specific regulation, general competition and consumer-protection regulation) covered AI implicitly without specific provisions. The 2018–2024 period produced the first wave of AI-specific regulatory development; the 2024–2027 period is producing a substantial maturation; the 2026–2030 trajectory will produce continued evolution.
Why AI governance is structurally distinct from prior tech governance. Several properties of contemporary AI deployment make existing regulatory frameworks insufficient.
First, AI systems can produce outcomes that are difficult to attribute to specific decisions. A foundation model trained on diverse data and deployed in many contexts produces outputs that emerge from the model’s learned representations rather than from explicit rules; identifying responsibility for specific outputs is structurally harder than for traditional software.
Second, AI deployment operates at substantially larger scale than prior technology deployment. A foundation-model-based service can serve hundreds of millions of users simultaneously; the consequences of any specific error or bias can scale to harm at population levels rather than affecting only individual users.
Third, AI capability is improving faster than regulatory frameworks can adapt. The late-2022 ChatGPT release demonstrated capabilities that required substantial regulatory rethinking; the 2024 generative-video and agent waves required further rethinking; the trajectory continues. Regulatory frameworks designed for stable technologies face structural challenges with rapidly-evolving AI.
Fourth, AI deployment often crosses jurisdictional boundaries. A foundation model trained in one country, hosted in a second, accessed by users in many others, used by businesses in still others — the cross-jurisdictional dimensions complicate regulatory enforcement.
Fifth, AI deployment touches multiple existing regulatory domains simultaneously: privacy (data used to train models); employment (AI-augmented or AI-replaced labour); consumer protection (AI-mediated transactions); medical regulation (AI in clinical contexts); financial regulation (AI in lending, trading, advice); and more. The intersection complicates compliance and produces regulatory overlap.
The multi-layered governance structure. Contemporary AI governance operates at four overlapping layers.
The international layer includes UN initiatives (the 2024 UN Resolution on AI; the 2025 ongoing UN AI Advisory Body work), G7 and G20 frameworks (the Hiroshima AI Process from 2023; subsequent developments), and the AI Safety Summit series (Bletchley 2023, Seoul 2024, Paris 2025) that has produced voluntary commitments from major AI developers and governments.
The national layer is where binding regulation typically operates. The EU’s AI Act, the US executive-action framework, the UK’s pro-innovation framework, China’s generative AI provisions, and various other national frameworks operate at this layer.
The sectoral layer applies AI-specific provisions to existing sector-specific frameworks. Healthcare, finance, automotive, professional services, and other sectors have specific frameworks that overlay the horizontal AI regulation.
The organisational layer includes the internal governance frameworks that AI developers and deployers establish. Anthropic’s Responsible Scaling Policy, OpenAI’s Preparedness Framework, Google DeepMind’s Frontier Safety Framework, and analogous internal frameworks at major AI firms operate at this layer.
The four layers interact. International commitments shape national legislation; national legislation interacts with sectoral frameworks; sectoral frameworks shape organisational practice; organisational practice produces case material that informs subsequent international and national developments.
The risk-based approach as the dominant framework. Across most major jurisdictions, the dominant regulatory approach is risk-based: AI systems are classified by the risk they present, with different risk levels triggering different regulatory requirements. The EU AI Act provides the most-developed example (Sections 14.3 and 14.4); the US, UK, and most Asian frameworks have adopted broadly similar approaches with different specific implementations.
The risk-based approach has substantial appeal: it aligns regulatory burden with risk; it allows low-risk applications to proceed without heavy compliance burden; it focuses regulatory attention on high-risk applications. The approach has limitations: defining risk categorically is harder than it appears; risk can change as technology and deployment evolve; the categorical boundaries can produce strategic gaming (firms designing systems to fall just below thresholds).
The horizontal-vs-vertical regulation distinction. A specific structural question is whether AI regulation should be horizontal (general-purpose, applying across sectors) or vertical (sector-specific, applying within particular industries). The EU AI Act is primarily horizontal; the US framework is primarily vertical (with some horizontal elements in executive action and state legislation). Both approaches have merits and limitations. Horizontal regulation supports consistency across applications but may not adequately address sector-specific concerns. Vertical regulation supports sector-specific tailoring but may produce regulatory gaps for cross-sector applications and complications when AI applications span sectors.
The contemporary pattern is increasingly hybrid: horizontal frameworks providing baseline requirements; sectoral frameworks adding sector-specific requirements; the combination producing the operational compliance landscape that firms must navigate.
14.2 The historical arc — from Asilomar to Bletchley
AI governance has developed over a roughly decade-long period from primarily voluntary ethics frameworks to substantial binding regulation.
The early AI ethics era — Asilomar 2017. The Asilomar AI Principles, adopted at the Asilomar Conference on Beneficial AI in January 2017, were among the most-prominent early ethics frameworks. The 23 principles addressed research issues, ethics and values, and longer-term issues; they were endorsed by approximately 1,800 AI researchers and adjacent stakeholders. The principles were entirely voluntary and aspirational; they did not produce direct compliance obligations. Their influence has been substantial as a reference point for subsequent frameworks but limited as an operational mechanism.
The 2018–2022 industry-and-academic ethics codes. The 2018–2022 period produced an extensive body of AI ethics codes from professional organisations, industry consortia, government advisory bodies, and individual firms. Notable examples include: the IEEE Ethically Aligned Design (2019); the OECD AI Principles (2019); the EU High-Level Expert Group’s Trustworthy AI Framework (2019); the Montreal Declaration for Responsible AI (2018); the Singapore Model AI Governance Framework (first edition 2019); various sector-specific codes from medical, legal, and financial professional associations. Most of the frameworks were voluntary and aspirational; they shared substantial common content (transparency, accountability, fairness, safety, privacy, human autonomy) but produced limited direct compliance obligations.
The Jobin et al. (2019) review identified 84 AI ethics frameworks published in this period; the substantial overlap among frameworks suggested both convergence on shared concerns and limited operational differentiation. The voluntary nature of the frameworks was their primary limitation; firms could endorse the frameworks without changing operational practice.
The 2018 GDPR enforcement context. A specific contextual development was the May 2018 entry into force of the EU’s General Data Protection Regulation. While not AI-specific, GDPR’s substantial enforcement provisions (penalties up to 4% of global revenue or EUR 20 million, whichever is greater) and broad scope (applying to processing of EU residents’ data regardless of where the processing occurs) substantially shaped the regulatory environment that subsequent AI legislation would build on. The Cambridge Analytica revelations (Section 10.4) intensified GDPR enforcement; the 2018–2024 GDPR enforcement record produced cumulative fines exceeding EUR 4 billion and substantial substantive case law that AI-specific regulation builds on.
The 2021–2023 EU AI Act development. The European Commission’s April 2021 proposal for the AI Act began the substantive development of binding AI-specific regulation in a major jurisdiction. The two-year legislative process through the Council and Parliament produced substantial revision; the 2022–2023 ChatGPT inflection forced the addition of General Purpose AI provisions that the original proposal had not contemplated. The political agreement was reached in December 2023; formal adoption followed in 2024; the staged implementation began in 2025. The AI Act’s development is the most-detailed example of how AI regulation has emerged in a major jurisdiction and is treated in detail in Sections 14.3 and 14.4.
The 2023 Bletchley AI Safety Summit. The UK government convened the AI Safety Summit at Bletchley Park in November 2023, with attendance from 28 countries plus the EU and major AI firms. The summit produced the Bletchley Declaration — a non-binding statement signed by participating governments acknowledging the potential for serious, even catastrophic, harm from frontier AI and committing to international cooperation on AI safety. The declaration was significant as the first multilateral AI-safety statement at this scale; the substantive commitments were modest. Major outcomes included the establishment of the UK AI Safety Institute (AISI; renamed AI Security Institute in 2025) and the commitment to subsequent summits.
The 2024 Seoul AI Safety Summit. The Seoul AI Safety Summit in May 2024, co-chaired by the UK and South Korea, extended the Bletchley framework with specific commitments. Sixteen frontier AI firms (including OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, Amazon, xAI, and Mistral, among others) made specific safety commitments, including the publication of frontier safety policies and explicit risk thresholds beyond which models would not be deployed. The commitments were voluntary but substantive; they produced the most-detailed industry-wide AI-safety framework to date.
The 2025 Paris AI Action Summit. The Paris AI Action Summit in February 2025, co-chaired by France and India, was structurally different from the Bletchley and Seoul summits. The Paris summit emphasised AI for the common good and the role of AI in global challenges (climate, health, education) rather than focusing primarily on safety risks. The summit produced the AI Action Declaration signed by approximately 60 countries, with explicit non-signature by the United States and the United Kingdom. The bifurcation reflected substantive disagreement about regulatory approach: some signatories (including France, India, and most EU member states) emphasised binding governance and broad public-interest framing; others (US, UK) emphasised voluntary frameworks and innovation-supportive approaches. The Paris pattern is informative: the international AI-governance consensus is partial, and major divergences in approach persist.
The 2026 trajectory. Subsequent AI summits continue to be planned through 2026 and beyond. The international layer is unlikely to produce comprehensive binding regulation; the structural differences across major jurisdictions and the pace of technological change make comprehensive multilateral regulation difficult. The international layer’s role is increasingly to coordinate national approaches rather than to substitute for them.
14.3 The EU AI Act — the most comprehensive framework
The EU Artificial Intelligence Act, formally Regulation (EU) 2024/1689, is the most-comprehensive AI regulation in any major jurisdiction. The Act applies to AI systems placed on the EU market or whose outputs are used in the EU, regardless of where the AI system is developed. The Act’s extraterritorial scope produces substantial Brussels-effect dynamics: AI firms operating globally typically design for EU compliance because the cost of differentiated systems for different markets is prohibitive.
The 2021 proposal through 2024 enactment. The European Commission’s April 2021 AI Act proposal began the formal legislative process. The original proposal focused on what would become the high-risk and prohibited categories; it did not address foundation models or general-purpose AI directly. The November 2022 ChatGPT release exposed this gap; the Council and Parliament added GPAI provisions through 2023. The political agreement was reached in December 2023; the European Parliament adopted the final text in March 2024; the Council formally approved in May 2024; the Act entered into force in August 2024 with progressive implementation through 2025–2027.
The risk-based framework — four categories. The Act classifies AI systems into four risk categories with different regulatory requirements.
Prohibited practices (Article 5) are AI applications that present unacceptable risk and are banned outright. The list includes: AI systems using subliminal techniques to materially distort behaviour; exploiting vulnerabilities of specific groups (children, persons with disabilities); social scoring by public authorities; real-time remote biometric identification in publicly accessible spaces (with limited law-enforcement exceptions); predictive policing based solely on profiling; emotion-recognition systems in workplaces and educational institutions; biometric categorisation systems inferring sensitive attributes; untargeted scraping of facial images from the internet for facial-recognition databases. The prohibitions entered into force in February 2025.
High-risk systems (Annex III) are AI applications in eight specified domains: biometrics; critical infrastructure (energy, water, traffic management); education and vocational training; employment, worker management, and access to self-employment; access to essential private and public services (credit scoring, insurance pricing, benefits assessment, emergency response); law enforcement; migration, asylum, and border control; administration of justice and democratic processes. High-risk systems must meet substantial conformity requirements (Section 14.4) before deployment.
Limited-risk systems (Article 50) are AI applications with specific transparency requirements. AI systems that interact with humans (chatbots), AI-generated synthetic content, and emotion-recognition or biometric-categorisation systems must disclose their AI nature to users. The transparency requirements are meaningful but not onerous.
Minimal-risk systems cover everything else — most commercial AI applications. They face no specific AI Act requirements beyond general legal frameworks (existing privacy, consumer-protection, and competition law).
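The four-tier structure lends itself to a first-pass triage step in a deployment review. The sketch below is illustrative only: the tier labels, keyword sets, and the `triage` helper are hypothetical constructs, and real classification turns on legal analysis of the system’s intended purpose and deployment context against the Act’s text, not on keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Article 5)"
    HIGH = "high-risk (Annex III)"
    LIMITED = "limited-risk (Article 50 transparency)"
    MINIMAL = "minimal-risk"

# Illustrative keyword sets only; real classification requires legal analysis
# of the system's intended purpose, not string matching.
PROHIBITED_PRACTICES = {
    "public_social_scoring", "realtime_remote_biometric_id_public_spaces",
    "workplace_emotion_recognition", "untargeted_face_scraping",
}
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice",
}
TRANSPARENCY_TRIGGERS = {"chatbot", "synthetic_content", "emotion_recognition"}

def triage(use_case: str) -> RiskTier:
    """First-pass triage of an intended use into an AI Act risk tier."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if use_case in ANNEX_III_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_TRIGGERS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("employment"))          # RiskTier.HIGH
print(triage("synthetic_content"))   # RiskTier.LIMITED
print(triage("route_optimisation"))  # RiskTier.MINIMAL
```

In practice the triage output would feed a legal review rather than a deployment decision.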
The General Purpose AI (GPAI) provisions. The Act’s Chapter V contains specific provisions for GPAI models — foundation models that can be used for many different applications. The provisions include two tiers.
All GPAI models must meet baseline requirements: technical documentation; information for downstream providers building on the model; a policy for complying with EU copyright law with respect to training data; a sufficiently detailed public summary of training content; cooperation with the AI Office. The requirements entered into force in August 2025.
GPAI with systemic risk — defined initially as models trained with more than 10^25 floating-point operations (FLOPs), with the AI Office authorised to designate additional models — face additional requirements: model evaluation against systemic risk; adversarial testing (red-teaming); serious-incident reporting; cybersecurity measures; energy consumption disclosure. As of late 2025, models known or believed to exceed the threshold include GPT-4 and successors; Claude 3.5 Sonnet, Claude 4.0/4.5/4.6/4.7 family; Gemini 1.5/2.0/2.5/3 family; possibly Llama 3.1 405B and Llama 4 family. Several Chinese frontier models also likely meet the threshold but their regulatory treatment in the EU is contested.
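Whether a model falls under the systemic-risk presumption is, in the first instance, an arithmetic question about training compute. A minimal sketch, assuming the widely used ~6 × parameters × tokens approximation for dense-transformer training compute; the parameter and token counts below are invented for illustration and are not disclosed figures for any named model.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs; the Act's initial presumption threshold

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute via the common ~6 * N * D rule."""
    return 6.0 * n_params * n_tokens

# Invented figures: a 400B-parameter model trained on 15T tokens.
flops = training_flops(400e9, 15e12)
print(f"{flops:.2e} FLOPs")                                            # 3.60e+25
print("systemic-risk presumption:", flops > SYSTEMIC_RISK_THRESHOLD)   # True
```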
The compliance timeline.
- February 2025 — Prohibited practices in force; AI-literacy provisions in force
- August 2025 — GPAI provisions in force; governance and AI Office structure operational
- August 2026 — High-risk (Annex III) requirements in force; transparency requirements in force
- August 2027 — Full implementation including high-risk systems under Annex I (safety-component systems in regulated products)
The staged timeline has produced substantial compliance preparation activity through 2025–2026; firms with EU operations have invested substantially in AI Act compliance infrastructure during this period.
The penalty structure. The Act establishes substantial penalties for non-compliance.
- Prohibited practices: up to EUR 35 million or 7% of global annual turnover, whichever is higher
- High-risk systems and GPAI obligations: up to EUR 15 million or 3% of global annual turnover
- Other violations: up to EUR 7.5 million or 1% of global annual turnover
The penalty structure is substantially higher than GDPR’s (which caps at 4% of global turnover or EUR 20 million); the deterrent intent is clear. Enforcement authority is shared across the European Commission’s AI Office, member-state authorities, and (for sectoral overlaps) sector-specific regulators.
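Because each cap is expressed as the higher of a fixed amount and a share of worldwide annual turnover, exposure scales with firm size. A minimal sketch of the cap calculation; the turnover figure is hypothetical.

```python
def fine_cap(global_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """The cap is the higher of a fixed amount and a share of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2bn worldwide annual turnover

print(fine_cap(turnover, 35_000_000, 0.07))  # prohibited practices: 140,000,000.0
print(fine_cap(turnover, 15_000_000, 0.03))  # high-risk / GPAI obligations: 60,000,000.0
print(fine_cap(turnover, 7_500_000, 0.01))   # other violations: 20,000,000.0
```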
The Brussels effect. The AI Act’s extraterritorial scope and substantial penalties produce strong incentives for global compliance. Firms operating in multiple jurisdictions typically design for the most-stringent applicable framework; for AI systems, that is currently the EU AI Act. The pattern matches the broader Brussels-effect dynamics of GDPR and other EU regulations: EU regulation effectively becomes global regulation through firms’ compliance choices. The 2025–2027 implementation period will substantially shape which AI systems are available globally and on what terms.
14.4 EU AI Act — high-risk systems and deployment requirements
The high-risk-systems provisions of the AI Act produce the most-detailed compliance requirements and are the most-relevant to many commercial AI deployments. The requirements draw on existing product-safety regulation (the Conformity Assessment framework for CE-marked products) but with AI-specific adaptations.
The Annex III high-risk categories in detail.
The eight Annex III categories cover applications where AI deployment can affect fundamental rights or critical societal interests:
Biometrics covers remote biometric identification systems, biometric categorisation, and emotion recognition. The category captures contemporary face-recognition deployment, voice-based identification, and adjacent biometric technologies.
Critical infrastructure covers AI systems used in safety-critical management of road traffic, water, gas, heating, electricity, and adjacent infrastructure. The category captures grid-management AI (Section 10.10), traffic-management systems, and adjacent infrastructure-AI deployment.
Education and vocational training covers AI systems used to determine access to educational institutions, evaluate learning outcomes, allocate students to educational tracks, and detect prohibited behaviour. The category has specific implications for AI-tutoring deployments (Section 12.2) — the systems’ specific applications determine whether they fall in or out of the high-risk category.
Employment, worker management, and access to self-employment covers AI used in recruitment (CV screening, candidate evaluation), promotion and termination decisions, task allocation, performance monitoring, and adjacent HR applications. The category has substantial implications for AI deployment in hiring and HR processes; the post-2024 regulatory developments in this area have been substantial.
Access to essential services covers AI used to evaluate creditworthiness (Chapter 6 covers the financial-services context), price insurance, evaluate eligibility for public benefits (Robodebt-style applications fall in this category), prioritise emergency response, and adjacent essential-services deployments.
Law enforcement covers AI used in risk assessment (predictive policing in modified form; the prohibited-practices section covers the most-aggressive predictive policing), evidence evaluation, profiling for investigation, and crime analysis. The category has substantial implications for police-and-justice AI deployment.
Migration, asylum, and border control covers AI used in visa-and-asylum decisions, security risk assessment, and document verification at borders.
Administration of justice and democratic processes covers AI used in interpreting facts and law, applying law to specific cases, and supporting judicial decisions. The category has substantial implications for legal-AI tools (Section 11.10).
The conformity assessment process. High-risk systems must undergo conformity assessment before being placed on the EU market or put into service. The assessment evaluates compliance with the Act’s substantive requirements:
- Risk management system. The provider must establish, implement, document, and maintain a risk management system that identifies and analyses known and foreseeable risks; estimates and evaluates the risks; implements appropriate risk-management measures.
- Data and data governance. Training, validation, and testing datasets must meet quality requirements (representativeness; minimisation of bias; appropriate statistical properties).
- Technical documentation. Comprehensive technical documentation of the system’s design, capabilities, limitations, and intended uses.
- Record-keeping. The system must enable automatic recording of events relevant to system operation and outputs.
- Transparency and information provision. The system must provide users with information about its capabilities, limitations, and intended use.
- Human oversight. Appropriate human-oversight measures must be designed into the system.
- Accuracy, robustness, and cybersecurity. The system must achieve appropriate levels of accuracy, robustness, and cybersecurity.
The conformity assessment is performed either by the provider itself (internal control, the route for most Annex III categories) or by a notified body (an independent third-party assessment organisation, required for certain biometric systems). The assessment results in a CE marking and a Declaration of Conformity that allow the system to be placed on the EU market.
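One way deployment teams operationalise the substantive requirements above is a conformity file that maps each requirement to evidence before assessment. The sketch below is a hypothetical internal structure, not a form prescribed by the Act; the document names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class RequirementEvidence:
    requirement: str           # one of the Act's substantive requirements
    evidence_docs: list[str]   # internal documents demonstrating compliance
    satisfied: bool = False

@dataclass
class ConformityFile:
    system_name: str
    requirements: list[RequirementEvidence] = field(default_factory=list)

    def ready_for_assessment(self) -> bool:
        return bool(self.requirements) and all(r.satisfied for r in self.requirements)

cv_tool = ConformityFile("cv-screening-tool", [
    RequirementEvidence("risk management system", ["risk_register_v3.pdf"], True),
    RequirementEvidence("data and data governance", ["dataset_datasheet.md"], True),
    RequirementEvidence("technical documentation", ["technical_file.pdf"], True),
    RequirementEvidence("record-keeping / logging", ["logging_design.md"], True),
    RequirementEvidence("transparency and information provision", ["user_guide.pdf"], True),
    RequirementEvidence("human oversight", ["oversight_procedure.md"], True),
    RequirementEvidence("accuracy, robustness, cybersecurity", ["evaluation_report.pdf"], False),
])
print(cv_tool.ready_for_assessment())  # False until every requirement has satisfied evidence
```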
The post-market monitoring requirements. Compliance does not end at deployment. Providers must implement post-market monitoring systems to track system performance during use; report serious incidents to authorities; cooperate with regulators on incident investigation; update the system as risks evolve; maintain the technical documentation throughout the system’s lifecycle.
The post-market monitoring requirements are substantial and ongoing; they produce compliance costs that recur over the system’s deployment lifetime, not just at initial market entry.
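Post-market monitoring is, at minimum, an ongoing comparison of live performance against the level accepted at conformity assessment. A minimal sketch of a rolling-window degradation check; the baseline, tolerance, and window values are arbitrary illustrations, and a real system would monitor multiple metrics and feed a documented incident process.

```python
from collections import deque

class PostMarketMonitor:
    """Rolling-window check of a deployed system's quality metric against the
    performance level accepted at conformity assessment."""

    def __init__(self, baseline: float, tolerance: float, window: int = 1000):
        self.baseline = baseline    # e.g. accuracy documented in the technical file
        self.tolerance = tolerance  # permitted absolute degradation before escalation
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.outcomes.append(1.0 if correct else 0.0)

    def degraded(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # insufficient post-deployment evidence so far
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance

monitor = PostMarketMonitor(baseline=0.92, tolerance=0.03)
# In production, every reviewed decision feeds monitor.record(...); a True from
# monitor.degraded() would trigger investigation and, where the criteria are met,
# serious-incident reporting.
```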
The fundamental rights impact assessment (FRIA). A specific requirement for certain deployers of high-risk systems — public authorities, private entities providing public services, and deployers of creditworthiness-assessment and insurance risk-pricing systems — is the Fundamental Rights Impact Assessment. The FRIA must:
- Describe the system’s intended use
- Identify the natural persons or groups likely to be affected
- Assess the specific risks to fundamental rights
- Describe the human oversight measures
- Specify the measures to be taken in case of materialisation of risks
The FRIA is conducted by the deployer, not the provider; it represents a deployer-side compliance obligation that complements the provider-side conformity assessment.
The deployment implications for specific Part II sectors.
For healthcare (Chapter 7), most clinical AI deployments fall in either the high-risk biometrics or high-risk essential-services categories (depending on application) or in adjacent regulatory frameworks (the Medical Device Regulation provides parallel requirements). The Watson Health pattern (broad scope without operational definition) would face substantial compliance challenges under the Act; the operational ML deployments at hospitals (predictive maintenance for equipment; specific diagnostic AI tools) would face manageable requirements.
For finance (Chapter 6), credit scoring and insurance pricing fall directly in the high-risk essential-services category. The compliance requirements layer on top of existing financial-services regulation; the combined burden is substantial. Specific products (algorithmic trading; fraud detection; underwriting) face different combinations of horizontal and vertical regulation.
For retail and e-commerce (Chapter 8), most applications fall in the minimal-risk category, with some specific applications (creditworthiness assessment for BNPL; certain employment-screening tools) in the high-risk category. The Klarna deployment would have faced specific transparency requirements under the limited-risk provisions but probably not the substantial high-risk requirements for the customer-service application itself.
For manufacturing (Chapter 9), most operational AI deployments fall outside Annex III but may fall under Annex I (safety-component systems in regulated products) when integrated into regulated products. The Boeing 737 MAX MCAS case illustrates the kind of safety-component system that the Act’s Annex I covers via reference to existing product-safety frameworks.
For government and public sector (Section 12.1), most applications fall in the high-risk category; the Robodebt-style failures would face substantial compliance challenges under the Act’s framework. The Act’s emphasis on human oversight, FRIA requirements, and post-market monitoring directly addresses the failure patterns that Robodebt exemplifies.
14.5 The US AI governance landscape
The United States approach to AI governance is structurally different from the EU’s. The federal government has not enacted comprehensive AI legislation comparable to the AI Act; the framework is built from executive action, sector-specific regulation, and increasing state-level legislation. The fragmentation produces specific dynamics — substantial regulatory activity but uneven coverage; jurisdictional variation across states; sectoral variation across regulators.
The Biden October 2023 executive order. Executive Order 14110, signed by President Biden on 30 October 2023, was the most-comprehensive single US executive action on AI. The order directed approximately 50 specific actions across federal agencies, including: NIST development of AI safety standards; Department of Commerce reporting requirements for certain frontier-model training (the 10^26 FLOP threshold for reporting); Department of Energy and Department of Homeland Security work on critical-infrastructure AI; Department of Health and Human Services AI safety in healthcare; Department of Education AI in education; Department of Justice AI in law enforcement; numerous adjacent provisions.
The order’s substantive impact was substantial through 2024 — federal agencies issued substantive guidance and rulemakings, and the broader executive-branch AI ecosystem accelerated. The order was rescinded by Executive Order 14148 signed by President Trump on 20 January 2025 (the day of the second Trump inauguration). Subsequent Trump administration AI policy through 2025–2026 has emphasised innovation-supportive framing, with substantially less prescriptive federal action than the Biden order had directed. The January 2025 executive order on Removing Barriers to American Leadership in Artificial Intelligence, which also directed the development of an AI Action Plan (published July 2025), has shaped a different framework that is still developing.
The OMB M-24-10 guidance, March 2024. The Office of Management and Budget Memorandum M-24-10, issued March 2024 under the Biden administration’s framework, established detailed requirements for federal-agency AI use. The memorandum required: agency Chief AI Officers; agency AI use-case inventories; risk management for AI used in rights-impacting and safety-impacting contexts; agency AI governance boards. Core elements of this structure (Chief AI Officers, use-case inventories, risk management for high-impact uses) have been carried forward in successor OMB guidance under the Trump administration through 2026, though the broader executive-branch emphasis has shifted.
State-level legislation. State-level AI legislation has accelerated through 2024–2026 in the absence of comprehensive federal regulation. Major state-level developments:
Colorado AI Act (signed May 2024; effective February 2026) — Colorado’s comprehensive AI consumer-protection legislation. The Act applies risk-based requirements to “high-risk artificial intelligence systems” used in employment, education, financial services, healthcare, housing, insurance, and legal services. The Act’s structure is broadly similar to the EU AI Act for high-risk systems, with US-specific adaptations.
California SB-1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act; passed legislature 2024; vetoed by Governor Newsom September 2024) — California’s high-profile attempt at comprehensive frontier-AI regulation. The bill would have established safety requirements for the largest AI models; the veto reflected innovation-policy concerns. California subsequently enacted SB-53 (the Transparency in Frontier Artificial Intelligence Act, signed 2025), which has more-modest frontier-transparency provisions, and AB-2013 (which addresses generative-AI training-data transparency).
New York City Local Law 144 (effective July 2023) — NYC’s automated employment decision tools (AEDT) law, which requires bias audits for AI tools used in employment decisions in NYC. The law has produced substantial compliance activity by employers; the audit requirements have shaped the broader employment-AI-audit landscape.
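The core quantitative step in an LL144-style bias audit is computing selection rates by demographic category and the impact ratio of each category relative to the highest-rate category. A minimal sketch with invented numbers; a real audit follows the rule’s category definitions and reporting requirements and is conducted by an independent auditor.

```python
def impact_ratios(selected: dict[str, int], assessed: dict[str, int]) -> dict[str, float]:
    """Selection rate per category divided by the highest category selection rate."""
    rates = {group: selected[group] / assessed[group] for group in assessed}
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Invented screening outcomes for a hypothetical automated CV-screening tool
print(impact_ratios(
    selected={"group_a": 120, "group_b": 80},
    assessed={"group_a": 400, "group_b": 350},
))
# {'group_a': 1.0, 'group_b': 0.7619...}
```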
Other state developments — Connecticut, Texas, Illinois, Washington, and other states have enacted or proposed AI-specific legislation through 2024–2026. The patchwork is substantial; the state-by-state variation produces complications for firms operating across multiple states.
Sector-specific federal regulation. Beyond horizontal regulation, US federal sector-specific regulators have substantial AI-specific authorities. The FDA framework for AI/ML medical devices (Section 7.7) is among the most-developed. The FTC has substantial authority over AI-related consumer protection and unfair-practices issues; the FTC’s 2023–2025 enforcement actions have addressed AI-marketing claims, deceptive AI products, and adjacent issues. The SEC has addressed AI in financial services, including enforcement attention to exaggerated AI claims (“AI-washing”) by registered investment advisers. The CFPB has addressed AI in consumer lending. The FCC’s February 2024 ruling on AI-generated voice in robocalls is another example.
The post-2024-election regulatory direction. The 2025–2026 trajectory under the Trump administration has been shaped by general deregulatory orientation in some areas combined with continued sector-specific activity. The Trump administration has emphasised: removal of what it characterises as barriers to American AI leadership; preservation of US AI competitiveness against China; reduced focus on bias and equity-related concerns that had been emphasised in the Biden framework. The state-level legislation, sector-specific regulation, and case law continue to develop independently of federal direction.
The US framework is therefore substantially fragmented across federal executive action, sector-specific federal regulation, state legislation, and case law. Compliance for firms operating in the US requires navigating this fragmentation; the operational complexity is substantial.
14.6 The UK approach — pro-innovation and the AI Safety Institute
The United Kingdom’s AI governance approach has been distinctive among major jurisdictions. The 2023 White Paper A pro-innovation approach to AI regulation established the framework: existing sector-specific regulators (Ofcom, ICO, FCA, MHRA, etc.) would extend their remits to cover AI within their sectors, supplemented by horizontal coordination but without comprehensive AI-specific legislation. The approach contrasts sharply with the EU’s comprehensive horizontal framework.
The 2023 White Paper. The pro-innovation framework rested on five cross-sectoral principles (safety; transparency; fairness; accountability; contestability) implemented through existing regulators rather than through new AI-specific legislation. The framework was articulated as supporting innovation while addressing AI risks through tested regulatory mechanisms; critics argued that the framework would produce regulatory gaps and inadequate consumer protection.
The 2024 transition. The July 2024 UK general election produced a Labour government replacing the Conservative government that had developed the pro-innovation framework. The Labour government’s stated AI-policy direction in 2024–2025 emphasised broader regulation and explicit AI-safety legislation; specific legislative proposals are progressing through 2025–2026. The Labour government’s framework retains substantial elements of the pro-innovation approach (continued reliance on existing regulators) while adding AI-specific elements (proposed safety-related requirements for the largest models).
The AI Safety Institute / AI Security Institute. The UK established the AI Safety Institute (AISI) in November 2023 alongside the Bletchley Summit. AISI’s mission was to evaluate frontier AI models for safety risks; the Institute has access to pre-deployment versions of frontier models from major US and UK firms (under voluntary agreements). AISI was renamed the AI Security Institute in February 2025, reflecting a shift in framing from broader safety to security concerns specifically. AISI has produced substantial technical evaluation work; specific reports on frontier-model capabilities have been published through 2024–2026. AISI’s role in the broader UK and international governance landscape has been substantial; the parallel US AI Safety Institute (within NIST) has worked closely with UK AISI on shared evaluation methodology.
The UK regulatory-sandboxes approach. The UK has emphasised regulatory sandboxes — controlled environments where firms can test AI applications with regulatory flexibility. The FCA’s Innovation Hub (operating since 2014) and Regulatory Sandbox (since 2016), both extended to AI applications, have been the most-prominent examples. The approach has produced specific regulatory adaptations through testing rather than through advance rulemaking.
The contrast with EU approach. The UK’s framework is substantially less prescriptive than the EU AI Act. UK firms face fewer specific compliance requirements than EU firms operating equivalent applications. The framework supports faster deployment for innovative applications but produces less-detailed consumer protections. The 2024–2026 trajectory has shown some convergence toward more-prescriptive UK frameworks, though substantial gaps with the EU approach remain.
14.7 China’s AI governance framework
China’s AI governance framework is structurally distinctive — it combines specific generative-AI regulations with broader technology regulation, with particular emphasis on content control and political alignment. The framework operates differently from Western frameworks on multiple dimensions and produces specific compliance requirements for AI deployment in Chinese markets.
The 2023 Generative AI Provisions. China’s Provisional Administrative Measures for Generative AI Services (effective August 2023) established specific requirements for generative AI providers operating in China. The provisions require: pre-launch security review for generative AI products; content moderation aligned with Chinese law (including content prohibitions on subjects sensitive to the Chinese government); identity verification for users; protection of user data; protection against bias. The compliance regime is substantially different from EU or US frameworks; foreign generative AI providers operating in China face specific compliance requirements that domestic providers also face.
The broader regulatory framework. Beyond AI-specific regulation, China’s AI deployment operates within broader technology-regulation frameworks: the Personal Information Protection Law (PIPL, effective November 2021) for data privacy; the Cybersecurity Law for security and data localisation; the Data Security Law for data classification and cross-border transfer restrictions. The cumulative regulatory burden is substantial; foreign firms face additional restrictions on what data can be transferred outside China and what AI services can be provided within China.
The 2024 trajectory. Through 2024–2026, China has continued to refine its AI regulatory framework. Specific developments include: deepfake-specific regulations addressing synthetic content (effective January 2023, with substantial enforcement); algorithmic recommendation regulations (effective March 2022) that require platforms to provide algorithmic transparency to users; specific provisions for AI in financial services, healthcare, and education from sector-specific regulators. The framework is more prescriptive than the US framework but less codified than the EU framework.
The contrast with Western approaches. Three structural contrasts are notable.
First, China’s framework emphasises content control and political alignment in ways that Western frameworks do not. AI systems must align with “core socialist values”; content prohibitions cover politically sensitive topics. The requirements substantially constrain the AI services available in China; foreign firms providing generative AI services in China typically modify their products to comply.
Second, China’s framework includes substantial data-localisation requirements that EU and US frameworks do not. AI systems trained on Chinese user data must keep that data in China; cross-border data transfers face approval requirements. The localisation requirements complicate global AI services.
Third, China’s enforcement is substantially more centralised and politically directed than Western enforcement. Specific AI products have been required to delay launch or modify functionality based on regulatory review; the regulatory authority is broad and discretionary.
The deployment implications for foreign firms. Foreign firms operating in China face specific compliance requirements that materially affect AI deployment. Major US AI products (ChatGPT, Claude, Gemini) are not officially available in China; access is limited or unavailable. Chinese firms (Baidu’s Ernie, Alibaba’s Qwen, ByteDance’s Doubao, DeepSeek, Moonshot’s Kimi, and others) provide alternatives. The 2024–2026 dynamics have produced both substantial Chinese AI capability development (DeepSeek’s January 2025 R1 release was a particularly visible inflection) and substantial regulatory and competitive separation between Chinese and Western AI ecosystems.
14.8 Other jurisdictions — Australia, Malaysia, Singapore, Japan, Korea, Brazil
Beyond the EU, US, UK, and China, other major jurisdictions have developed their own AI governance frameworks. The patterns vary substantially in scope, mechanism, and stringency.
Australia. Australia’s AI governance framework has been developing through 2018–2026 with substantial post-Robodebt impetus. Key developments:
- AI Ethics Framework (DISER, 2019) — voluntary principles framework that established baseline expectations.
- Australian Government AI Use Policy (September 2024) — specific requirements for federal-government AI use, with explicit attention to the Robodebt lessons.
- Voluntary AI Safety Standard (September 2024) — voluntary standard for AI deployment by non-government entities.
- Mandatory AI Guardrails for High-Risk AI (in development, 2024–2026) — proposed mandatory requirements modelled partly on the EU AI Act for high-risk applications.
- Sector-specific frameworks — TGA for medical devices (Section 7.7); ASIC for financial services; OAIC for privacy.
- Privacy Act reform (in progress, with specific AI-related provisions) — extends Australia’s privacy framework with AI-specific elements.
The Australian framework is mid-stringency relative to global comparators — more prescriptive than the US framework but less so than the EU. The post-Robodebt legacy is substantial; the framework explicitly addresses public-sector AI in ways that other jurisdictions have not.
Malaysia. Malaysia’s AI governance framework has been developing through the National AI Roadmap (2021–2025) and related policy initiatives. Key elements:
- National AI Roadmap (2021) — strategic framework for AI development including governance principles.
- Personal Data Protection Act (PDPA; original 2010, substantially amended 2024) — Malaysian privacy framework with specific AI-relevant updates.
- Bank Negara Malaysia AI guidance — sector-specific guidance for financial services.
- Medical Device Authority guidance — sector-specific guidance for medical AI (Section 7.7).
- MyDigital framework — broader digital-economy framework that includes AI components.
The Malaysian framework is at an earlier stage than the EU or Australian frameworks; comprehensive horizontal AI legislation has not yet been enacted. The 2024–2026 trajectory has produced substantial regulatory development as Malaysia positions itself as a regional AI hub (Section 12.11).
Singapore. Singapore’s AI governance framework has been one of the most-developed in Asia. The Model AI Governance Framework (first edition 2019; second edition 2020; substantial update 2024) has been the most-influential regional framework. Singapore’s approach combines voluntary frameworks with sector-specific binding regulation; the IMDA (Infocomm Media Development Authority) and MAS (Monetary Authority of Singapore) have been particularly active. Singapore has emphasised practical implementation tools — the AI Verify framework (2022) provides tools for testing AI systems; the framework has gained adoption beyond Singapore.
Japan. Japan’s AI governance framework has been less prescriptive than EU or Australian frameworks. The Society 5.0 framework (2016 onwards) provided the broader vision; specific AI governance has emphasised voluntary industry frameworks plus sector-specific regulation. The Hiroshima AI Process code of conduct for advanced AI systems (agreed through the G7 process in late 2023) has been Japan’s most-prominent international contribution.
South Korea. South Korea passed the AI Basic Act in December 2024, becoming the first major Asian jurisdiction to enact comprehensive AI legislation. The framework is structurally similar to elements of the EU AI Act (risk-based approach; specific high-risk categories) with Korean-specific adaptations. The Act enters into force in January 2026, one year after promulgation.
Brazil. Brazil’s AI legislation (the Marco Legal da Inteligência Artificial) has been progressing through the Brazilian Congress through 2023–2026. The proposed framework is broadly EU-AI-Act-aligned with Brazilian-specific provisions; final passage and implementation is expected through 2025–2027.
The patchwork pattern. The cumulative pattern across jurisdictions is substantial regulatory variation. Firms operating globally face complex compliance requirements that differ across markets. The compliance cost is substantial; it falls most heavily on firms with global operations, and the capacity to absorb it advantages large firms with the resources to navigate the patchwork. Smaller firms typically focus on specific markets rather than attempting global compliance from the start.
14.9 Privacy and data-governance overlay
AI governance interacts substantially with privacy and data-governance frameworks that predate AI-specific regulation. The interaction produces specific compliance complications and shapes how AI deployment operates across jurisdictions.
GDPR’s continued enforcement. The General Data Protection Regulation (effective May 2018) covers AI deployment substantially because most AI systems process personal data. GDPR provisions particularly relevant to AI include: lawful-basis requirements for processing; data-subject rights including right to explanation for automated decision-making (Article 22); data-protection-by-design requirements; cross-border data transfer restrictions; substantial penalty provisions.
The Cambridge Analytica case (Section 10.4) was a substantial driver of GDPR enforcement intensification. Subsequent enforcement against Meta (EUR 1.2 billion fine in May 2023), Amazon (EUR 746 million in 2021), and many others has produced substantial case material. The cumulative enforcement record through 2024 exceeded EUR 4 billion in fines. The 2025–2026 enforcement trajectory has continued to produce major fines, including specific AI-related cases.
The Cambridge Analytica governance precedent. Cambridge Analytica produced multi-jurisdictional regulatory response: GDPR enforcement in the EU; FTC action in the US (the USD 5 billion settlement of July 2019); UK ICO action; state-level action in the US; substantial reform of platform-data-access frameworks. The cumulative regulatory consequences have shaped the broader AI-governance environment substantially. The case is structurally important: it demonstrates how a single high-profile failure can produce cross-jurisdictional regulatory response that constrains the entire industry’s subsequent deployment.
The PDPA framework in Malaysia and Singapore. The Personal Data Protection Act (PDPA) frameworks in Malaysia (effective 2013, substantially amended 2024) and Singapore (effective 2014, substantially amended 2020) provide privacy frameworks parallel to GDPR with regional adaptations. The 2024 Malaysian PDPA amendment substantially increased compliance requirements; the Singapore PDPA’s data-portability and broader consent provisions affect AI deployment patterns. The frameworks interact with AI-specific regulation as it emerges; the cumulative compliance burden is substantial for firms operating across the region.
The Australian Privacy Principles. The Australian privacy framework (the Privacy Act 1988 with the Australian Privacy Principles as implementation framework) has been substantially less stringent than GDPR. The 2024 Privacy Act reform (in progress through 2024–2026) is extending Australian privacy substantially, with specific AI-related provisions including a proposed right to challenge significant automated decisions and substantially increased penalty provisions. The framework’s interaction with the Australian government’s 2024 AI Use Policy and the proposed Mandatory AI Guardrails produces a multi-layered Australian framework.
The US patchwork. The United States lacks a comprehensive federal privacy framework comparable to GDPR. State-level privacy laws (California’s CCPA effective 2020 and CPRA effective 2023; similar laws in Virginia, Colorado, Connecticut, Utah, Texas, Tennessee, and many others) produce a patchwork. Sector-specific federal privacy rules (HIPAA for health; GLBA for financial; COPPA for children) cover specific contexts. The cumulative US privacy framework is substantial but fragmented; AI deployment must navigate the patchwork.
The interaction with AI-specific regulation. Privacy and AI-specific regulation interact in specific ways:
- Training data sourcing. AI training data must comply with privacy frameworks; the cumulative compliance cost has shaped what training data is available.
- Decision-making transparency. Privacy frameworks (particularly GDPR Article 22) require transparency for automated decision-making; AI-specific frameworks add additional transparency requirements.
- Data-subject rights. Privacy frameworks provide rights to access, correct, and delete personal data; the implementation in AI contexts is complex (deleting data from a trained model is structurally difficult).
- Cross-border data transfer. AI deployment often involves cross-border data flows; privacy frameworks restrict such flows in specific ways.
The cumulative privacy-and-AI compliance landscape is substantially more complex than either framework alone produces. The compliance practice has evolved through 2018–2026 to address the combined requirements; specific firms have developed sophisticated compliance infrastructure that smaller firms cannot easily match.
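On the training-data side, the combined requirements reduce in practice to per-source provenance records: what the lawful basis is, where the data originates, and whether a transfer mechanism covers its movement. A minimal sketch of such a record and a filter over it; the field names and example sources are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    contains_personal_data: bool
    lawful_basis: str | None        # e.g. "consent", "legitimate interests", or None
    origin_jurisdiction: str        # e.g. "EU", "US", "MY"
    transfer_mechanism: str | None  # e.g. "SCCs", "adequacy decision", or None

def usable_for_training(src: DataSource, training_jurisdiction: str) -> bool:
    """Crude screen: personal data needs a lawful basis, and cross-border
    movement needs a documented transfer mechanism."""
    if src.contains_personal_data and src.lawful_basis is None:
        return False
    if src.origin_jurisdiction != training_jurisdiction and src.transfer_mechanism is None:
        return False
    return True

sources = [
    DataSource("licensed-news-archive", False, None, "US", None),
    DataSource("eu-customer-tickets", True, "legitimate interests", "EU", "SCCs"),
    DataSource("scraped-forum-dump", True, None, "EU", None),
]
print([s.name for s in sources if usable_for_training(s, "US")])
# ['licensed-news-archive', 'eu-customer-tickets']
```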
14.10 Frontier-model governance and AI safety
A specific dimension of AI governance covers frontier models — the largest and most-capable AI systems whose development requires substantial resources and whose deployment can have outsized consequences. The frontier-model governance framework is substantially developed at the organisational layer (firm-internal frameworks) and increasingly at the national and international layers.
Anthropic’s Responsible Scaling Policy. Anthropic’s Responsible Scaling Policy (RSP), first published September 2023 with substantial updates through 2024–2025, is the most-developed firm-internal frontier-AI safety framework. The RSP defines AI Safety Levels (ASL) ranging from ASL-1 (minimal risk) through ASL-4+ (substantial risks). Each level has specific deployment criteria and required safety measures; movement between levels requires specific evaluation against defined capability thresholds. The framework provides explicit commitments about what Anthropic will and will not do with models at different capability levels. The RSP has been influential beyond Anthropic; subsequent firm-internal frameworks have adopted similar structures.
OpenAI’s Preparedness Framework. OpenAI’s Preparedness Framework, first published in December 2023 with subsequent updates, addresses similar concerns with somewhat different structure. The framework defines tracked risk categories (cybersecurity, CBRN, persuasion, model autonomy) and risk levels for each (low, medium, high, critical). Models are evaluated against the framework before deployment; specific risk-level thresholds trigger specific mitigations. The framework has been updated several times through 2024–2025; OpenAI’s commitment to specific aspects of the framework has been controversial at times, with internal departures of safety-team members through 2024 producing public attention to the framework’s implementation.
Google DeepMind’s Frontier Safety Framework. Google DeepMind published its Frontier Safety Framework in May 2024 with subsequent updates. The framework covers similar territory to Anthropic’s RSP and OpenAI’s Preparedness Framework with Google-specific structure and capability-level definitions. The framework’s implementation has been progressive through 2024–2025.
The voluntary commitments — Bletchley/Seoul. The 2024 Seoul AI Safety Summit produced the Frontier AI Safety Commitments — voluntary commitments by sixteen frontier AI firms covering: publication of frontier safety policies; explicit risk thresholds; deployment-decision processes; transparency about implementation; ongoing engagement with the safety community. The commitments are voluntary but substantive; compliance is monitored through firms’ own publications and external evaluation. The 2025–2026 trajectory has produced uneven implementation — some firms have published substantive frameworks and demonstrated implementation; others have produced thinner frameworks or implementation that observers have characterised as inadequate.
The interpretability research. A specific dimension of frontier-model governance is interpretability research — work aimed at understanding what large models actually do internally. The mechanistic interpretability literature has grown substantially through 2022–2026. Major research programs at Anthropic (the dictionary-learning work; the recent Claude interpretability publications), OpenAI (specific interpretability publications), Google DeepMind, the broader academic community (the Mechanistic Interpretability Workshop tradition), and specific independent organisations (Apollo Research; ARC; the broader field) have produced substantial methodological progress. The 2026 state of the field is that interpretability has improved substantially but is not yet at a level that supports comprehensive auditing of frontier models.
The broader question of whether frontier-model governance frameworks are adequate is contested. Proponents argue that the frameworks represent substantial progress and continue to mature. Critics argue that the frameworks are voluntary, lack independent verification, and may not adequately address the most-significant risks. The 2026–2030 trajectory will substantially shape how this contest evolves; specific incidents (frontier-model deployment failures producing significant harm; or counter-examples of frameworks successfully averting harms) will inform the debate.
14.11 Sector-specific governance
Beyond horizontal AI governance, sector-specific frameworks impose specific requirements for AI applications in regulated industries. The frameworks are typically more detailed than horizontal frameworks within their specific scope, and they produce distinct compliance requirements that overlay the horizontal frameworks.
Healthcare and medical AI. Section 7.7 covered the medical-AI regulatory framework in detail: FDA AI/ML SaMD (US); EU MDR and IVDR; TGA (Australia); MDA (Malaysia). The 2024–2026 trajectory has produced substantial maturation in each jurisdiction. The FDA’s PCCP (Predetermined Change Control Plans) framework for continuously-learning systems has begun to produce clearances. The EU’s MDR/IVDR have been progressively implementing. The TGA’s 2024 update of AI-medical-device guidance extended the Australian framework. Sector-specific governance for healthcare AI is substantially more mature than for most other sectors; the deployment patterns reflect this maturation.
Financial services. Chapter 6 covered the financial-services regulatory framework. The 2024–2026 trajectory has produced specific AI-focused regulatory developments: the Federal Reserve’s 2024 Supervisory Letter on AI use; the OCC’s parallel guidance; the FCA’s 2024 AI Update including the AI Live Tests; ASIC’s AI guidance for Australian financial services; Bank Negara Malaysia’s continued elaboration of AI governance for Malaysian financial services. The financial-services AI regulatory framework is among the most-detailed sector-specific frameworks; the cumulative compliance burden is substantial but the operational discipline matches the sector’s broader operational requirements.
Automotive and aviation. The Boeing 737 MAX MCAS case (Section 9.7) produced substantial reform of US aviation certification through the 2020 Aircraft Certification, Safety, and Accountability Act (ACSAA) and continuing FAA reforms. Automotive AI faces specific regulatory frameworks for autonomous vehicles (NHTSA in the US; UNECE WP.29 internationally); framework development through 2024–2026 has been substantial as the autonomous-vehicle industry has worked through the regulatory questions the technology raises. The 2024 retrenchment in the AV industry (Section 12.4) reduced some of the urgency around comprehensive AV regulation; the ongoing deployments (Waymo and others) continue to operate within developing regulatory frameworks.
Professional services. Bar-association responses to AI use (Section 11.10), including the American Bar Association’s Formal Opinion 512 (July 2024), represent the professional-services governance framework for law. State bar disciplinary processes have continued to develop through 2024–2026; specific cases applying the Mata v. Avianca framework have produced additional case material. The Public Company Accounting Oversight Board’s (PCAOB) attention to AI in financial-statement audits, and parallel developments in actuarial and tax practice, represent the broader professional-services governance.
Telecommunications. Telecommunications regulators (the FCC in the US, Ofcom in the UK, the ACMA in Australia, the MCMC in Malaysia) have addressed specific AI applications. The FCC’s February 2024 declaratory ruling on AI-generated voice in robocalls is a particularly visible example; broader AI-related telecommunications regulation has been developing through 2024–2026.
The cumulative pattern. The cumulative pattern is that major regulated sectors have AI-specific frameworks that overlay horizontal AI regulation. Compliance for AI deployments in regulated sectors requires navigating both horizontal and vertical frameworks; the interaction produces operational complexity that less-regulated sectors do not face.
14.12 Algorithmic accountability and impact assessment
A specific dimension of AI governance is algorithmic accountability — the framework for evaluating how AI systems make decisions that affect individuals and groups, identifying biases or harms, and providing mechanisms for affected parties to challenge decisions.
The fundamental rights impact assessment under the EU AI Act. The FRIA requirement (Section 14.4) is the EU’s specific framework for algorithmic accountability for high-risk AI systems. The FRIA requires deployers to: identify affected individuals and groups; assess specific risks to fundamental rights; describe human oversight measures; and specify mitigation measures. The framework’s implementation through 2025–2027 will produce substantial case material; specific FRIAs published by deployers will inform best practice across the industry.
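A deployer’s internal FRIA record might capture those four elements in a structured form along the following lines. The Act prescribes what a FRIA must address, not how deployers store it; the field names and example content below are an assumed illustrative format, not a template from the Act.

```python
# Illustrative internal record for a fundamental rights impact assessment (FRIA).
# The field names are an assumption about one reasonable internal format.

from dataclasses import dataclass

@dataclass
class FRIARecord:
    system_name: str
    deployer: str
    affected_groups: list[str]                 # who the system's decisions affect
    fundamental_rights_risks: dict[str, str]   # right at risk -> how the risk may arise
    human_oversight_measures: list[str]        # who reviews decisions, and when
    mitigation_measures: list[str]             # what reduces each identified risk
    review_date: str = ""                      # FRIAs are living documents, not one-offs

fria = FRIARecord(
    system_name="benefit-eligibility-scoring",       # hypothetical system
    deployer="Example Agency",
    affected_groups=["benefit applicants", "current recipients"],
    fundamental_rights_risks={
        "non-discrimination": "scoring may penalise irregular income patterns",
        "effective remedy": "applicants may not understand why they were flagged",
    },
    human_oversight_measures=["case officer reviews every adverse decision"],
    mitigation_measures=["plain-language notices", "accessible appeal channel"],
    review_date="2026-08-01",
)
```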
US algorithmic accountability proposals. Federal US legislation on algorithmic accountability has been proposed multiple times (the Algorithmic Accountability Act of 2022 and successors) without enactment. State-level legislation has produced specific frameworks: NYC Local Law 144 (Section 14.5) requires bias audits for AI employment tools; the Colorado AI Act (Section 14.5) extends similar requirements more broadly. The cumulative US framework is fragmented but produces substantial compliance activity in covered jurisdictions.
Audits and impact assessments. A specific operational mechanism for algorithmic accountability is the audit. AI audits typically evaluate specific dimensions: bias across demographic groups; accuracy across different populations; explainability of decisions; appropriate scope of use. The audit ecosystem has developed substantially through 2020–2026; specific firms (Babl AI, ORCAA, AlgorithmWatch, and many others) provide audit services. The methodology is still maturing; audit quality varies substantially across providers; the 2024–2026 trajectory has produced increasing attention to audit methodology and standards.
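One widely reported audit metric for employment tools, and the metric Local Law 144 bias audits report, is the selection-rate impact ratio: each group’s selection rate divided by that of the most-selected group. The sketch below uses hypothetical counts and is illustrative only; the law’s implementing rules define the required categories and reporting in detail.

```python
# Selection-rate impact ratio: each group's selection rate scaled by the
# most-selected group's rate. Counts below are hypothetical.

def impact_ratios(selected: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    """Per-group selection rate divided by the highest group's selection rate."""
    rates = {g: selected[g] / total[g] for g in total if total[g] > 0}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

ratios = impact_ratios(
    selected={"group_a": 120, "group_b": 45},   # candidates advanced per group
    total={"group_a": 400, "group_b": 220},     # candidates screened per group
)
print(ratios)
# {'group_a': 1.0, 'group_b': 0.68...}: a ratio well below 1.0 flags a disparity
# that the audit would report and the deployer would need to investigate.
```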
The Robodebt-style failure as governance motivator. The Robodebt case (Section 12.1) is structurally important for understanding why algorithmic-accountability frameworks have developed. Robodebt’s failures — the income-averaging approach was unlawful; the affected populations could not effectively challenge automated decisions; the appeal-and-review pathway did not scale with deployment volume — exemplify the kinds of harms that algorithmic-accountability frameworks aim to prevent. The Australian framework’s specific provisions (the proposed right to challenge significant automated decisions; the Mandatory AI Guardrails; the 2024 government AI policy) explicitly address Robodebt-style failure modes.
The contestability question. A specific dimension of algorithmic accountability is contestability — the ability of affected parties to challenge automated decisions. Different frameworks address contestability differently. GDPR Article 22 provides a right to obtain human review of significant automated decisions; the EU AI Act adds specific contestability requirements for high-risk systems; the proposed Australian framework includes contestability requirements; the Brazilian Marco Legal includes similar provisions. The implementation of contestability in practice is substantially varied; the 2026–2030 trajectory will produce more case material on what effective contestability looks like in practice.
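As a design sketch, a contestability hook can be as simple as recording that a significant decision was made by an automated system, telling the affected person so, and routing any contest to a human reviewer. The following is an assumed illustrative pattern showing that routing logic; it is not a compliance recipe for GDPR Article 22 or the AI Act.

```python
# Minimal contestability hook: significant automated decisions carry a notice
# of the right to human review, and contests are queued for a human reviewer.

from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    automated: bool
    significant: bool

review_queue: list[Decision] = []

def issue_decision(decision: Decision) -> str:
    """Deliver a decision; significant automated decisions get a contest route."""
    if decision.automated and decision.significant:
        return (f"Decision '{decision.outcome}' was made by an automated system. "
                f"You may request human review of this decision.")
    return f"Decision: {decision.outcome}"

def contest(decision: Decision) -> None:
    """Affected person requests human review; route the case to a reviewer."""
    review_queue.append(decision)

d = Decision(subject_id="A-1023", outcome="application declined",
             automated=True, significant=True)
print(issue_decision(d))
contest(d)  # the case now awaits a human reviewer
```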
14.13 The compliance landscape and operational implications
The cumulative governance landscape produces substantial operational implications for AI deployment. Firms must navigate the multi-layered, jurisdictionally-varied, sector-specific framework while maintaining commercial viability.
The compliance cost. Estimates of AI compliance cost vary substantially. A 2024 Centre for European Policy Studies analysis estimated AI Act compliance cost at EUR 30,000–400,000 per high-risk AI system depending on complexity. Industry surveys (KPMG, Deloitte, EY) through 2024–2025 have produced larger estimates that include the broader compliance infrastructure (chief AI officer; AI governance committee; ongoing monitoring; documentation). For large firms operating multiple AI products in multiple jurisdictions, annual AI compliance cost is in the millions of dollars.
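A back-of-envelope illustration, using the mid-point of the cited CEPS per-system range and assumed (hypothetical) portfolio figures, shows how the totals reach that order of magnitude:

```python
# Illustrative arithmetic only: the per-system figure sits within the cited
# CEPS range; the system count, multiplier, and infrastructure figure are
# hypothetical assumptions, not survey data.

per_system_cost_eur = 200_000               # mid-range of the EUR 30k-400k estimate
high_risk_systems = 8                       # hypothetical portfolio size
jurisdiction_multiplier = 1.5               # hypothetical multi-jurisdiction uplift
governance_infrastructure_eur = 1_500_000   # hypothetical: staff, committees, tooling

annual_estimate = (per_system_cost_eur * high_risk_systems * jurisdiction_multiplier
                   + governance_infrastructure_eur)
print(f"EUR {annual_estimate:,.0f}")        # EUR 3,900,000: of the order of millions
```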
The competitive implications. The compliance cost has competitive implications. Larger firms can absorb substantial compliance investment; smaller firms cannot. The pattern has produced concerns about regulatory capture — that large incumbent firms benefit from regulations that smaller competitors cannot match. The concerns are partly valid (compliance costs do produce barriers to entry) and partly oversimplified (the safety and consumer-protection benefits of regulation are not captured in pure compliance-cost analysis).
The fragmentation problem. The cumulative regulatory fragmentation across jurisdictions produces substantial operational complexity. Firms operating globally must comply with EU AI Act, US state laws, sector-specific federal regulation, UK frameworks, China’s framework, and many others. The compliance staff and infrastructure required is substantial. The fragmentation is not converging quickly; the 2026–2030 trajectory will likely continue to produce substantial jurisdictional variation.
The operational best practices. Firms have developed specific operational practices for navigating the compliance landscape:
- AI governance committees with representation from legal, compliance, engineering, business, and ethics functions.
- AI use-case inventories tracking what AI is used where in the firm.
- Pre-deployment review processes applying specific compliance and risk-assessment tests before AI products are launched.
- Ongoing monitoring tracking AI performance and compliance during deployment.
- Incident response protocols for handling AI-related incidents that may have regulatory or compliance implications.
- Documentation infrastructure maintaining the technical documentation that compliance frameworks require.
- External audit and assurance relationships providing independent evaluation.
The practices are substantially more mature at large firms than at smaller firms; the 2026–2030 trajectory will produce broader maturation as the compliance landscape stabilises.
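As an illustration of the use-case inventory practice listed above, a minimal inventory record might look like the following. The fields are an assumed format, chosen so that one record can drive pre-deployment review, ongoing monitoring, and documentation obligations; real inventories are typically richer and tool-supported.

```python
# Illustrative AI use-case inventory record; field names are assumptions about
# one reasonable internal format, not a prescribed standard.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    business_owner: str
    purpose: str
    risk_tier: str            # e.g. "minimal", "limited", "high" per internal taxonomy
    jurisdictions: list[str]  # where the system is deployed
    review_status: str        # "pre-deployment", "approved", "under remediation"
    last_reviewed: str

inventory = [
    AIUseCase(
        name="cv-screening-assistant",      # hypothetical use case
        business_owner="HR operations",
        purpose="rank inbound applications for recruiter review",
        risk_tier="high",
        jurisdictions=["EU", "US-NYC"],
        review_status="approved",
        last_reviewed="2026-03-15",
    ),
]
print([u.name for u in inventory if u.risk_tier == "high"])
```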
The operational implications for the playbook chapters. The Part V playbook chapters develop AI deployment discipline that anticipates the compliance landscape. Chapter 21 (MVP scoping) explicitly addresses the operational definition that compliance frameworks require. Chapter 23 (evaluation) develops the evaluation discipline that conformity-assessment frameworks require. Chapter 24 (alpha launch) and Chapter 25 (beta) develop the staged-deployment discipline that risk-management frameworks support. Chapter 28 (commercialisation) explicitly addresses regulatory and compliance considerations as part of commercialisation planning. The playbook discipline aligns with the regulatory landscape; firms that follow it are better-positioned for compliance than firms that do not.
14.14 The 2026–2030 trajectory and convergence question
The AI governance landscape in 2026 is comprehensive but unstable. Several trajectories will shape the 2026–2030 evolution.
Trajectory 1 — EU AI Act full implementation. The Act’s high-risk-systems requirements enter into force in August 2026; full implementation continues through August 2027. The 2026–2027 implementation will produce substantial case material on how the Act operates in practice. Specific enforcement actions, specific compliance precedents, and specific judicial interpretations will progressively clarify the framework. The 2028–2030 trajectory will see substantial maturation of EU AI Act practice.
Trajectory 2 — US fragmentation continuation. The US framework is unlikely to converge into comprehensive federal legislation through 2026–2030. The state-level patchwork will continue to develop; sector-specific federal regulation will continue to evolve; the federal executive direction will continue to shift with administrations. The fragmentation is structural to the US political system; firms operating in the US will continue to face substantial compliance complexity.
Trajectory 3 — UK clarification. The Labour government’s regulatory direction is still developing through 2025–2026; specific legislative proposals are progressing. By 2027–2028, the UK framework will likely be substantially clearer; the question is whether it converges toward EU stringency or maintains differentiated pro-innovation positioning.
Trajectory 4 — international convergence vs divergence. The international AI governance pattern through 2026–2030 will involve both convergence dynamics (Brussels-effect; international summit commitments; sector-specific harmonisation) and divergence dynamics (substantively different regulatory philosophies; geopolitical separation between Western and Chinese AI ecosystems; specific national-priority differences). The net effect will likely be partial convergence — substantial common ground on core issues; substantial differences on specific provisions and enforcement.
Trajectory 5 — frontier-model governance maturation. The frontier-model governance frameworks (RSP, Preparedness Framework, Frontier Safety Framework) will continue to develop through 2026–2030. The voluntary commitments may evolve toward more-binding frameworks; specific incidents will inform framework adjustment. The interpretability research will continue to produce capability that supports more-rigorous governance; the 2030 frontier-model governance landscape may differ substantially from the 2026 landscape.
Trajectory 6 — compliance professionalisation. The AI compliance and governance profession is rapidly developing. Specific credentials (AI governance professional; AI ethics officer; AI auditor) are emerging; substantial educational programmes are training compliance professionals; the consultancy market for AI compliance services is growing. The 2030 compliance landscape will be substantially more professionalised than the 2026 landscape; the operational discipline at firms will reflect this maturation.
The bridge to subsequent Part III chapters: Chapter 15 develops the labour-and-economic effects that this chapter has touched on. Chapter 16 develops the maturity framework that allows specific deployments to be assessed against capability and operational maturity. Chapter 17 integrates the analytical frameworks. Chapter 18 returns to specific cases at greater synthesised depth.
The governance landscape is increasingly central to AI deployment. Firms that integrate governance thinking into deployment design from the start have advantages over firms that retrofit compliance after deployment. The cautionary cases of Part II — Robodebt, Cambridge Analytica, Boeing MAX, Watson Health, Klarna — provide the empirical foundation for understanding why governance frameworks have developed and how they shape contemporary deployment. The frameworks themselves are not the deployment goal; the goal is AI deployment that produces value while avoiding the specific harms that the cautionary cases exemplify. The frameworks are the operational mechanism for achieving that goal.
References for this chapter
EU AI Act
- European Parliament and Council (2024). Regulation (EU) 2024/1689 (the AI Act).
- European Commission AI Office (2024, 2025). Implementation guidance and decisions.
- European Commission (2021). Proposal for a Regulation on Artificial Intelligence.
US AI governance
- Executive Office of the President (2023). Executive Order 14110, 30 October 2023.
- Executive Office of the President (2025). Executive Order 14148 (rescinding EO 14110), January 2025.
- Office of Management and Budget (2024). Memorandum M-24-10.
- Colorado General Assembly (2024). Colorado Artificial Intelligence Act, SB 24-205.
- New York City Council (2021). Local Law 144 (effective 2023).
UK governance
- UK Department for Science, Innovation and Technology (2023). A pro-innovation approach to AI regulation. White Paper.
- UK AI Safety Institute / AI Security Institute (2024, 2025). Reports and evaluations.
International summits
- UK Government (2023). Bletchley Declaration, November 2023.
- Republic of Korea and UK (2024). Seoul AI Safety Summit Frontier AI Safety Commitments.
- France and India (2025). Paris AI Action Declaration, February 2025.
China
- Cyberspace Administration of China (2023). Provisional Administrative Measures for Generative Artificial Intelligence Services.
- People’s Republic of China (2021). Personal Information Protection Law.
Other jurisdictions
- Government of Australia (2024). Australian Government AI Use Policy.
- Department of Industry, Science and Resources (2024, 2025). Voluntary AI Safety Standards; Mandatory AI Guardrails for High-Risk AI consultations.
- Government of Malaysia (2021). National AI Roadmap.
- IMDA Singapore (2020, 2024). Model AI Governance Framework.
- South Korea National Assembly (2024). AI Basic Act.
Privacy and data governance
- European Parliament and Council (2016). General Data Protection Regulation (Regulation EU 2016/679).
- Information Commissioner’s Office UK (2017–2024). GDPR enforcement actions.
- US Federal Trade Commission (2019). Settlement with Facebook Inc.
- Personal Data Protection Department Malaysia (2024). PDPA amendments.
- Office of the Australian Information Commissioner (2024). Privacy Act reform commentary.
Frontier-model governance
- Anthropic (2023, 2024, 2025). Responsible Scaling Policy (multiple versions).
- OpenAI (2023, 2024, 2025). Preparedness Framework (multiple versions).
- Google DeepMind (2024). Frontier Safety Framework.
- Multiple AI firms (2024). Frontier AI Safety Commitments at Seoul AI Safety Summit.
AI ethics and governance literature
- Jobin, A., Ienca, M., and Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence 1: 389–399.
- Future of Life Institute (2017). Asilomar AI Principles.
- OECD (2019). OECD Principles on AI.
- IEEE (2019). Ethically Aligned Design.
Sector-specific governance
- US Food and Drug Administration (2021, 2023, 2024). AI/ML SaMD Action Plan and PCCP guidance.
- US Federal Reserve, OCC (2024). Supervisory communications on AI in financial services.
- American Bar Association (2024). Formal Opinion 512 on AI use by lawyers.
- US Federal Communications Commission (2024). Declaratory ruling on AI-generated voice in robocalls.
Compliance and operational implications
- Centre for European Policy Studies (2024). AI Act compliance cost analysis.
- KPMG (2024). AI governance survey.
- Deloitte (2024). AI compliance landscape report.
- BCG, McKinsey (2024, 2025). AI governance benchmarks.