Artificial intelligence has emerged as a transformative technology with profound implications for society, the economy, and national security. However, its rapid development and deployment have outpaced the establishment of effective governance frameworks, creating a complex landscape of legal and ethical challenges that states must navigate. While pioneering regulatory approaches have emerged globally, significant gaps remain in international coordination, enforcement capacity, and the resolution of core tensions among innovation, fundamental rights, and public safety.
Regional Governance Frameworks and Approaches
The European Union’s Binding Regulatory Model
The European Union has established the most comprehensive AI governance framework globally through its AI Act, which entered into force on August 1, 2024. The Act implements a risk-based classification system categorizing AI systems into four levels: unacceptable risk (prohibited), high risk (strictly regulated), limited risk (transparency obligations), and minimal risk (no specific requirements). High-risk AI systems must adhere to strict obligations including risk management, transparency, human oversight, and data governance. The regulation applies to both EU-based and third-country providers placing AI systems on the European market, creating extraterritorial effects. Non-compliance penalties are substantial, reaching up to €35 million or 7 percent of global annual turnover, whichever is higher. The enforcement structure involves a centralized European AI Office supervising general-purpose AI models, complemented by national competent authorities managing implementation and enforcement at the member state level.
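The tiered structure described above can be summarized schematically. The following sketch (illustrative only, not legal guidance) encodes the four risk tiers and the penalty ceiling for the most serious infringements, which is the higher of a fixed cap or a share of global turnover; the tier labels and the obligation list are paraphrased from the summary above rather than quoted from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strictly regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific requirements"

# Obligations attached to the high-risk tier, paraphrased from the summary above.
HIGH_RISK_OBLIGATIONS = [
    "risk management",
    "transparency and technical documentation",
    "human oversight",
    "data governance",
]

def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Penalty ceiling for the most serious infringements: EUR 35 million or
    7 percent of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

print(RiskTier.HIGH.value)             # "strictly regulated"
print(max_penalty_eur(1_000_000_000))  # 70000000.0 for a EUR 1bn-turnover provider
```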
United States: Fragmented Federal and State Approaches
The United States has adopted a decentralized regulatory approach, with no unified federal AI liability regime currently in place. Rather than comprehensive legislation, the government has issued non-binding frameworks including the White House AI Bill of Rights (2022) and the Executive Order on AI (2023). At the federal level, the proposed Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312) seeks to establish requirements for generative AI and high-impact systems, designating the Secretary of Commerce as the primary enforcement authority. The Act proposes civil fines up to $300,000 or twice the value of the AI system involved in noncompliance, with a 15-day grace period for certain violations. State-level regulation has proliferated rapidly, with more than 1,000 state AI bills introduced in 2025, creating a patchwork of requirements. Notable state legislation includes Colorado’s Concerning Consumer Protections in Interactions with Artificial Intelligence Systems Act (Colorado AI Act or CAIA), prohibiting algorithmic discrimination in high-risk systems, and California’s Transparency in Frontier Artificial Intelligence Act requiring companies to report risks and safeguards.
China’s State-Controlled Approach
China has adopted a centralized governance model emphasizing state control and cultural values, with government approval required for AI technology sales including speech recognition, text recognition, and personalized recommendation systems. This approach prioritizes national security and state oversight while maintaining rapid technological development, reflecting broader strategic priorities around technological sovereignty.
The United Kingdom’s Principles-Based Framework
The United Kingdom has opted for a principles-based approach rather than comprehensive legislation, relying on existing sector regulators and legal frameworks such as the UK GDPR and Equality Act. The UK AI Safety Institute has established itself as a leader in international AI safety research and standards development, taking an active role in global governance coordination without binding domestic regulations.
Critical Legal Challenges
Algorithmic Bias and Discrimination
One of the most pressing legal challenges states face involves algorithmic discrimination and the perpetuation of bias in AI systems. AI systems can embed biases present in training data or deliberately incorporated during design, leading to disparate treatment of protected classes. Courts have increasingly recognized that companies using AI and vendors supplying AI can be held liable for discriminatory conduct when their systems produce disparate impacts affecting protected groups. The legal doctrine of “disparate impact” has proven particularly powerful in addressing AI bias, as it allows plaintiffs to challenge facially neutral policies that disproportionately harm individuals belonging to protected classes without requiring proof of intentional discrimination. Major cases illustrate the scope of this challenge: in Mobley v. Workday, five plaintiffs over age forty alleged they were rejected, without interviews, from hundreds of jobs that screened applicants with Workday’s system, and the court allowed their age discrimination claims under the Age Discrimination in Employment Act to proceed; in Huskey v. State Farm, an algorithmic fraud detection system allegedly subjected Black policyholders to additional administrative delays; and litigation over SafeRent’s tenant-screening algorithm resulted in a settlement of roughly $2 million.
The complexity deepens because determining liability for algorithmic discrimination differs fundamentally from traditional discrimination law. While disparate treatment claims require proof of intentional discrimination, disparate impact doctrine focuses on outcomes, making it better suited to addressing algorithmic bias even where system design is not transparent. However, the opacity of modern AI systems—particularly deep learning models that operate as “black boxes”—creates evidentiary challenges even under disparate impact frameworks.
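Disparate impact analysis is, at bottom, a comparison of outcome rates across groups. The sketch below applies the “four-fifths” screening heuristic used in US employment discrimination analysis to hypothetical hiring outcomes; the group labels, data, and threshold handling are illustrative assumptions, and a ratio below 0.8 signals possible adverse impact warranting review, not legal liability.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest; values below
    0.8 trip the four-fifths screening threshold and warrant closer review."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical hiring outcomes: (group label, hired?)
outcomes = ([("over_40", True)] * 10 + [("over_40", False)] * 90
            + [("under_40", True)] * 30 + [("under_40", False)] * 70)

ratio, rates = adverse_impact_ratio(outcomes)
print(rates)   # {'over_40': 0.1, 'under_40': 0.3}
print(ratio)   # ~0.33 -> well below 0.8, flag for further review
```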
Transparency, Explainability, and the “Black Box” Problem
Trust in AI systems fundamentally hinges on transparency, yet many of the most potent AI models function as opaque “black boxes,” producing highly accurate results without clear explanations for their decisions. This lack of explainability raises critical governance concerns, particularly in regulated domains such as finance, healthcare, law enforcement, and criminal justice, where algorithmic decisions directly affect fundamental rights. The challenge is not merely technical; it reflects a profound tension between model performance and interpretability. Deep learning models achieve their accuracy by passing vast amounts of data through numerous layers of neural networks, making their decision-making processes largely inscrutable even to their creators.
Governance frameworks increasingly mandate transparency mechanisms such as Explainable AI (XAI) and algorithmic auditing to provide stakeholders insights into AI decision-making processes. The EU AI Act requires high-risk AI systems to be transparent and auditable, with providers obligated to maintain detailed technical documentation and establish conformity assessment procedures. However, implementing meaningful transparency faces substantial technical and practical obstacles. Organizations must provide clear explanations of how AI systems collect, store, and use personal data while balancing comprehensiveness with accessibility. Additionally, documentation requirements have expanded dramatically, with organizations required to maintain comprehensive audit trails and technical documentation of AI models, including training methodologies, data sources, and validation procedures.
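One widely used model-agnostic technique behind such transparency tooling is permutation importance: shuffle one input feature at a time and measure how much predictive accuracy drops. The sketch below is a minimal illustration on synthetic data using scikit-learn; it is not a complete XAI audit, and the model, features, and data are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a high-risk decision model and its held-out evaluation data.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])     # break the link between feature j and the label
    drop = baseline - model.score(X_perm, y_test)
    print(f"feature {j}: accuracy drop {drop:.3f}")  # larger drop = more influential feature
```

Permutation importance explains which inputs drive a model’s aggregate behavior; it does not by itself explain any individual decision, which is why governance frameworks typically pair it with documentation and case-level review.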
Data Privacy and the Tension with AI Development
A fundamental tension exists between AI’s data requirements and stringent data protection regulations, particularly the EU’s General Data Protection Regulation (GDPR). While sophisticated AI systems typically require large volumes of training data to function effectively, GDPR’s principles of data minimization and purpose limitation explicitly restrict the collection and use of personal data. This creates several interconnected challenges.
The principle of data minimization directly conflicts with AI development needs, as machine learning algorithms for natural language processing, image recognition, and other tasks require substantial datasets whose size often correlates with model performance. Additionally, purpose limitation prevents data collected for one purpose from being repurposed without explicit authorization, yet AI systems often involve multiple downstream applications that may not be anticipated during initial data collection. Managing dynamic consent—allowing users to withdraw consent at any time—requires robust data management systems and audit trails that many organizations struggle to implement. Moreover, AI-driven systems may aggregate data from multiple sources, making it difficult to trace the origin and purpose of individual data points and creating risks of consent violations.
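In practice, dynamic consent requires an auditable record of who consented to what, for which purpose, and when that consent changed. The following sketch shows a hypothetical in-memory consent registry with an append-only audit log; the class and field names are assumptions, and a production system would need persistent, tamper-evident storage.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRegistry:
    _current: dict = field(default_factory=dict)    # (subject_id, purpose) -> granted?
    _audit_log: list = field(default_factory=list)  # append-only history of consent events

    def record(self, subject_id: str, purpose: str, granted: bool) -> None:
        event = {
            "subject_id": subject_id,
            "purpose": purpose,          # purpose limitation: consent is tracked per purpose
            "granted": granted,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        self._audit_log.append(event)
        self._current[(subject_id, purpose)] = granted

    def may_process(self, subject_id: str, purpose: str) -> bool:
        # Processing is only permitted for the exact purpose consent was given for.
        return self._current.get((subject_id, purpose), False)

registry = ConsentRegistry()
registry.record("user-42", "model_training", granted=True)
registry.record("user-42", "model_training", granted=False)  # consent withdrawn
print(registry.may_process("user-42", "model_training"))     # False
print(len(registry._audit_log))                               # 2 events retained for audit
```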
The GDPR grants data subjects rights including access to their data, correction of inaccuracies, and erasure. However, deleting individual data entries may compromise the performance or integrity of AI systems, creating additional compliance challenges. Furthermore, AI systems can infer new information from existing data; even if an individual’s data is not explicitly included in a training dataset, AI systems can generate identifying insights that fall outside GDPR’s current scope on personal data. Non-compliance with GDPR can result in penalties reaching €20 million or 4 percent of global annual turnover, whichever is greater, and the EU AI Act introduces additional enforcement mechanisms that compound these financial exposures.
Accountability and Liability in AI-Driven Systems
A fundamental legal challenge involves determining responsibility for harm caused by AI systems. Unlike traditional software or products where causation and responsibility are relatively clear, AI systems present novel attribution problems. When an algorithm makes a decision that causes harm, multiple actors may share responsibility: the developers who designed the system, the organizations deploying it, and the users implementing its recommendations. Traditional liability doctrines based on negligence and product liability struggle to accommodate the nature of algorithmic decision-making.
The EU has attempted to address this through the proposed AI Liability Directive (AILD) and the revised Product Liability Directive (PLD), which adapt liability rules to high-risk AI systems and shift part of the evidentiary burden toward providers and deployers. These instruments establish presumptions of causation that ease the burden on claimants seeking damages for AI-related harms—a significant departure from traditional tort principles requiring proof of specific causation. In contrast, the United States maintains a fragmented approach rooted in traditional tort law distributed across state jurisdictions, creating uncertainty for organizations operating nationally or globally. States often require companies testing autonomous systems to shoulder liability, while federal agencies like the National Highway Traffic Safety Administration set vehicle safety standards and the Food and Drug Administration regulates medical AI devices.
A critical emerging concern involves insurance coverage gaps. Many directors and officers (D&O) insurance policies now include AI-related liability exclusions, leaving organizations without coverage for AI governance failures. This creates direct exposure for corporate leadership, who face fiduciary duties to ensure appropriate risk management of AI systems. The complexity deepens when considering that static annual audits prove insufficient for adequate oversight; boards require continuous visibility into AI deployments with real-time monitoring to identify risks before they materialize.
Autonomous Weapons Systems and National Security
The development and potential deployment of lethal autonomous weapons systems (LAWS) present profound legal and ethical challenges involving fundamental questions of state responsibility and human control. Under international humanitarian law, states must ensure that LAWS they develop or deploy comply with applicable laws, and states remain accountable for military actions taken with these systems. However, the question of whether truly autonomous AI systems—“independent and endowed with a degree of autonomous agency”—can be attributed to states for legal accountability purposes remains contested. If such systems possess genuine autonomous decision-making capacity, the traditional link between human conduct and state action becomes “too vague and weak” to ground attribution of conduct to the state.
The principle of human control requires that persons remain accountable under international humanitarian law for use of LAWS, yet determining what constitutes meaningful human control remains contentious. The challenge extends beyond military applications; states increasingly employ AI in intelligence analysis, surveillance, and law enforcement contexts where accountability mechanisms remain underdeveloped. The U.S. National Security Framework for AI distinguishes between prohibited AI uses (such as emotional state inference without consent, sole reliance on biometrics to infer protected characteristics, and final immigration decisions without human oversight) and high-impact AI uses requiring enhanced oversight (including biometric tracking, national security threat classifications, and AI outputs used as sole basis for intelligence assessments). However, globally, monitoring adoption of AI-based technologies by diverse threat actors presents acute governance challenges with significant potential for technological surprise and unanticipated risks.
Ethical Challenges in AI Development and Deployment
Human Oversight and Decision-Making Authority
The European Union’s AI Act emphasizes the principle of human oversight, mandating that certain AI systems be designed and developed to be “effectively overseen by natural persons during the period in which they are in use” with appropriate human-machine interface tools. This principle reflects the ethical conviction that humans should retain meaningful control over significant decisions, particularly those affecting fundamental rights. High-risk AI systems must provide users with instructions containing information about anticipated characteristics, capabilities, and limitations, based on sufficient testing during development and analysis of training datasets.
However, implementing meaningful human oversight presents substantial practical challenges. Organizations must equip staff with appropriate skills, resources, and competencies to handle AI responsibly in day-to-day work. The requirement that human overseers remain aware of automation bias—the tendency to defer to automated systems—proves particularly challenging, requiring that humans maintain capacity to stop or override AI decisions. Documentation of human oversight is mandatory under the EU AI Act, along with assessment of how it could mitigate potential risks. For high-risk systems, deployers must ensure that humans charged with oversight have sufficient training and authority, receive proper support to do their jobs correctly, and that their oversight work does not impede other tasks required to operate AI systems correctly. Continuous monitoring and logging throughout an AI system’s entire lifecycle are mandatory, requiring suitable organizational structures and processes that enforce constant supervision.
Fairness, Non-Discrimination, and Inclusion
Ensuring fairness and preventing discrimination across diverse populations represents a fundamental ethical imperative in AI governance. AI systems must uphold moral principles and respect human rights, which requires establishing clear frameworks for assigning responsibility and ensuring redress for individuals affected by AI-related incidents. Ethical concerns extend beyond preventing obvious discrimination to ensuring that AI systems serve diverse populations equitably. Training data representativeness presents particular challenges; datasets reflecting limited demographic diversity will inherently perform worse for underrepresented populations. Similarly, the subjective nature of AI design means that human values—including biases and preferences—become embedded in systems during development phases.
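A basic representativeness check compares the demographic composition of a training set against a reference population. The sketch below flags groups whose share of the data falls short of their reference share by more than an illustrative tolerance; the group labels, reference shares, and threshold are hypothetical.

```python
from collections import Counter

def representation_gaps(sample_labels, reference_shares, tolerance=0.05):
    """Flag groups whose share of the training data falls short of the reference
    population share by more than `tolerance` (an illustrative threshold)."""
    counts = Counter(sample_labels)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        sample_share = counts.get(group, 0) / total
        if ref_share - sample_share > tolerance:
            gaps[group] = {"reference": ref_share, "sample": round(sample_share, 3)}
    return gaps

# Hypothetical training-set group labels and reference population shares.
labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
reference = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
print(representation_gaps(labels, reference))
# group_b and group_c are under-represented relative to the reference population
```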
Privacy and Data Governance
Privacy protection through AI governance extends beyond GDPR compliance to encompass fundamental ethical principles of human dignity and autonomy. Individuals should retain control over their personal data and understand how AI systems process it. This requires organizations to protect end-user privacy and sensitive data through strong AI governance and data usage policies. Transparency in AI processes must enable accountability and trust from stakeholders and the public. The challenge involves creating systems where humans can meaningfully understand and control how their data influences AI decisions.
Environmental and Societal Well-being
Governance frameworks increasingly recognize that AI systems must consider societal and environmental implications beyond narrow technical performance metrics. The computational demands of training large AI models carry environmental costs; addressing sustainable computational practices becomes part of responsible AI governance. Additionally, broader societal impacts must be assessed—how AI implementation affects employment, social cohesion, political processes, and cultural values in affected communities.
International Coordination Gaps and Fragmentation
The Proliferation of Uncoordinated Frameworks
As of August 2025, artificial intelligence governance operates through a complex ecosystem of overlapping international, regional, and national frameworks characterized by proliferation exceeding coordination capacity. The emergence of over 1,000 state AI bills in 2025 demonstrates regulatory multiplication without proportional coordination enhancement. While international declarations and national strategies multiply, systematic mechanisms for effective coordination remain severely limited.
Coordination effectiveness varies sharply across domains of the global AI governance landscape. Technical standards development achieves the greatest coordination success, reflecting a universal interest in interoperability. Trade and competition coordination operates at moderate levels through existing mechanisms, while human rights and ethics coordination remains characterized by “high rhetoric, limited operational coordination.” Most concerning, national security considerations show “minimal coordination” despite active competition between major powers.
Major Power Approaches and Strategic Competition
The capacity to develop, deploy, and regulate AI is currently highly concentrated among three major actors: the European Union, the United States, and China. These entities have adopted fundamentally divergent approaches reflecting different values and governance philosophies. The EU foregrounds comprehensive new regulation with binding enforcement mechanisms, the United States takes a more laissez-faire approach with decentralized state regulation, and China relies on a hybrid approach combining industry self-discipline with targeted secondary legislation and state control. This triadic competition creates a strategic challenge: while multilateral agreement among these three players might theoretically suffice to mitigate key risks, little consensus exists regarding the necessary policy responses.
Coordination Mechanisms and Their Limitations
Formal coordination bodies exist but operate with limited effectiveness. The Global Partnership on AI functions as a multi-stakeholder coordination platform, the World Economic Forum operates the AI Governance Alliance for public-private coordination, and the OECD maintains a Network of Experts on AI for technical coordination. However, these mechanisms face significant constraints. UN mechanisms show “low coordination” effectiveness in integrating with national frameworks, while coordination among the major powers themselves remains minimal. The recently announced UN High-Level Advisory Body on AI, Global AI Scientific Panel, and Global AI Dialogue Platform represent efforts to enhance international coordination, but implementation remains nascent.
The G7 Hiroshima AI Process and G20 AI Principles demonstrate declaration-based coordination with “limited operational coordination,” lacking mechanisms for translating declarations into coordinated action. Even regional integration efforts face challenges; while the EU maintains high coordination internally through its governance structures, ASEAN, the African Union, and other regional blocs are still developing internal coordination.
Cross-Border Regulatory Conflicts and Compliance Burdens
Organizations operating across multiple jurisdictions face acute compliance challenges as AI governance requirements diverge systematically. Cross-border operators confront binding EU regulation that expects AI systems to undergo documented risk assessment, human oversight, logging, data governance processes, robustness controls, post-market monitoring, and external audit or conformity mechanisms. An organization that develops or deploys AI in the EU market must build its product-development cycle to generate compliance evidence rather than simply adding legal clauses. In the United States, the absence of federal legislation creates a different challenge: companies cannot rely on the absence of federal law and must instead map which state laws apply, what “high-risk” means in those contexts, and how liability arises from existing laws rather than bespoke AI obligations.
The United Kingdom’s principles-based approach means that “no AI Act” does not equal “no regulation”; AI systems deployed in employment, finance, housing, or public services contexts still trigger obligations under existing legal frameworks including the UK GDPR and the Equality Act. Canada’s pending AI statute sets expectations for future compliance even before it is enacted, requiring organizations to anticipate its requirements. Across all four major jurisdictions, a clear pattern emerges: AI systems are being treated as regulated infrastructure subject to oversight, traceability, and audit-ready documentation, rendering the era when one could deploy algorithms with standard licensing clauses effectively obsolete.
Emerging Governance Tools and Mechanisms
Regulatory Sandboxes for Controlled Testing
Recognizing the challenges of regulating rapidly evolving technologies, jurisdictions globally have adopted regulatory sandboxes—frameworks that allow participating organizations to experiment with emerging technologies in controlled environments under regulatory supervision and with reduced enforcement exposure. Sandboxes enable organizations to test innovative products with real-world data under ongoing regulatory supervision, allowing regulators to observe how AI behaves with unforeseen variables and to identify risks early. The EU AI Act requires all member states to establish national or regional regulatory sandboxes, with particular emphasis on annual reporting, tailored training, and priority access for startups and small-to-medium enterprises.
Regulatory sandboxes generate multiple benefits. For regulators, they encourage data-informed policy by surfacing real-time concerns and opportunities during legislative processes, help agencies build capacity through interaction with industry, and facilitate development of best practices regarding how new technologies intersect with established laws. For organizations, particularly startups and SMEs lacking resources for complex compliance, sandboxes provide regulatory certainty, reduce time to market, foster knowledge sharing, and position organizations as forerunners in governance-conscious development. Across the globe, sandboxes have expanded rapidly: Kenya’s sandbox enabled scalability-focused AI development, Singapore launched a new sandbox in July 2025 addressing AI agent deployment challenges, France and Brazil established sector-specific sandboxes, and Utah created a U.S. state-level sandbox.
However, sandboxes present challenges and limitations. The structures must balance learning and innovation against meaningful consumer protection, requiring careful governance design. Additionally, sandboxes do not themselves solve the underlying cross-border fragmentation challenge; organizations testing in multiple jurisdictions’ sandboxes must still navigate divergent final requirements.
Human Oversight Implementation
Implementing human oversight as mandated by the EU AI Act and emerging governance frameworks globally requires translating ethical principles into operational practice. Organizations must establish clear oversight mechanisms such as ethics committees or review boards to monitor compliance and guide ethical decision-making. For certain high-risk use cases under the EU AI Act, such as remote biometric identification, deployers may not act on a system’s output until it has been separately verified by at least two natural persons. Technical solutions supporting human oversight include Explainable AI techniques enabling stakeholders to understand how models arrive at conclusions, user interfaces highlighting the data points that most influenced AI outputs (such as ChatGPT’s citation mechanisms), and systematic testing during development combined with ongoing monitoring post-deployment.
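The “two natural persons” requirement can be operationalized as a verification gate that blocks downstream action until two distinct reviewers have signed off. The sketch below is a hypothetical illustration of such a gate; the workflow, class names, and identifiers are assumptions rather than anything prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class PendingAction:
    output_id: str
    description: str
    verified_by: set = field(default_factory=set)   # distinct reviewers who signed off

class TwoPersonGate:
    REQUIRED_VERIFIERS = 2

    def __init__(self):
        self._pending = {}

    def submit(self, output_id: str, description: str) -> None:
        self._pending[output_id] = PendingAction(output_id, description)

    def verify(self, output_id: str, reviewer: str) -> bool:
        """Record a reviewer's sign-off; return True only once two distinct people have verified."""
        action = self._pending[output_id]
        action.verified_by.add(reviewer)
        return len(action.verified_by) >= self.REQUIRED_VERIFIERS

gate = TwoPersonGate()
gate.submit("match-0017", "biometric identification match flagged for follow-up")
print(gate.verify("match-0017", "officer_a"))  # False - one sign-off is not enough
print(gate.verify("match-0017", "officer_a"))  # False - the same reviewer only counts once
print(gate.verify("match-0017", "officer_b"))  # True  - two distinct reviewers have confirmed
```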
Integrating human oversight throughout the AI lifecycle is critical: during design, mechanisms must allow humans to understand, monitor, and intervene in AI systems; during deployment, continuous monitoring and evaluation keeps systems within ethical boundaries; and post-deployment, the capacity to intervene and rectify systems when unforeseen circumstances arise remains essential.
Post-Market Monitoring and Incident Reporting
Modern governance frameworks increasingly require ongoing surveillance of AI systems after deployment. The EU AI Act mandates post-market monitoring, facilitates information sharing on incidents and malfunctions, and ensures market surveillance authorities can act on their own initiative or upon receiving complaints from any person suspecting an infringement. Market surveillance authorities possess investigative and corrective powers, enabling them to impose administrative fines for various infringements and, when operators fail to take adequate corrective action within specified periods, to stop or limit system availability or use.
Real-time monitoring is essential because static annual audits cannot detect emerging risks or drift in AI system behavior over time. Requirements for documentation and audit trails help establish accountability while enabling rapid response when performance degrades or unforeseen behaviors emerge.
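One common way to operationalize continuous monitoring is to compare the distribution of live model scores against a baseline logged at deployment, for example with the population stability index (PSI). The sketch below uses synthetic scores and a rule-of-thumb alert threshold; both the metric choice and the threshold are assumptions rather than regulatory requirements.

```python
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live score distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    o_counts, _ = np.histogram(observed, bins=edges)
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)  # avoid log(0) on empty bins
    o_pct = np.clip(o_counts / o_counts.sum(), 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.5, 0.10, 10_000)  # scores logged at deployment
live_scores = rng.normal(0.6, 0.15, 10_000)      # scores observed in production

drift = psi(baseline_scores, live_scores)
print(f"PSI = {drift:.3f}")
if drift > 0.25:                                 # common rule-of-thumb alert threshold
    print("Significant drift: trigger review and incident documentation")
```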
Capacity-Building and Inclusive Global Governance
The Developing Country Capability Gap
Significant asymmetries in AI governance capacity exist between developed and developing nations. While developed countries possess technical expertise, research infrastructure, and regulatory institutions necessary for AI governance, many lower-and-middle-income countries lack the foundational hardware, software infrastructure, technical expertise, and institutional capacity required for effective governance. This capacity gap creates risks of governance failure in developing regions and perpetuates inequalities in who shapes AI governance globally.
Recognizing these challenges, the United Nations has launched initiatives addressing AI capacity-building. China and Zambia jointly proposed the “Artificial Intelligence Capacity-Building Action Plan For Good and For All” at the UN General Assembly, emphasizing that establishing solid infrastructure foundations enables countries to effectively leverage AI according to local conditions. The plan addresses infrastructure development, industry and ecosystem empowerment, talent development, public literacy enhancement, data construction, and safety governance. Significantly, its emphasis on “teaching people to fish, rather than giving them fish” underscores that every country should cultivate its own talent and develop original AI capabilities, respecting national sovereignty in AI development.
International Cooperation and North-South Partnership
Effective global AI governance requires meaningful partnership between developed and developing nations, with developed countries sharing experiences in AI evaluation and regulation while assisting underdeveloped countries and regions in bridging the digital divide. Local data sourcing for AI training proves essential to incorporate local civilizations and cultures into AI learning processes, enabling systems to better serve specific regions. The Global Digital Compact commitment to advancing AI capacity-building through international partnerships represents a nascent framework attempting to address these inequalities, though implementation mechanisms remain underdeveloped.
Emerging Risk Categories and Safety Concerns
Political System Risks and Disinformation
Risks to political systems and societies grow more likely as generative AI development and adoption widen. The proliferation of synthetic media threatens to erode democratic engagement and public trust in governmental institutions through fake news, personalized disinformation, and manipulation of populations. These risks stem from AI’s capacity to serve as a force multiplier for malicious actors, proliferating and enhancing threat actor capabilities while increasing attack speed, scale, and sophistication.
Physical System Vulnerabilities
Physical risks rise as generative AI becomes embedded in critical infrastructure and physical systems. Implementation without adequate safety and security controls may introduce new risks of system failure and vulnerabilities to attack. Insecure use and misuse create risks of data leaks, biased and discriminatory systems, or compromised human decision-making through poor information security and opaque algorithmic processes producing “hallucinations” (confidently presented false information). Over-reliance on supply chains that are opaque, potentially fragile, and controlled by small numbers of firms creates cascade failure risks.
National Security and Geopolitical Competition
Investments in AI and related technologies, whether inbound or outbound, present significant national security ramifications including potential unauthorized data exposure, access by foreign adversaries to sensitive technology, supply chain vulnerabilities, and intellectual property theft. Nation-states, strategic competitors, and other adversaries may leverage AI-related investments to create disinformation campaigns aimed at spreading false narratives, influencing opinions, and undermining trust through deepfakes and other synthetic media.
Regulatory responses to these threats have multiplied. The Committee on Foreign Investment in the United States (CFIUS) reviews transactions for national security effects, while the Outbound Investment Security Program (OISP) regulates certain AI transactions with nexuses to specific countries of concern. The Department of Justice’s Data Security Program imposes data security compliance obligations on certain non-passive investments involving foreign countries meeting specified thresholds.
Monitoring and Transparency Limitations
A fundamental challenge in AI governance involves limited government insight into private sector AI development and deployment progress, constraining regulatory capacity to mitigate safety and security risks. Monitoring adoption of AI-based technologies by diverse potential threat actors proves challenging, creating significant potential for technological surprise and unanticipated risks. Regulatory responsiveness must accelerate as technological development continues to outpace governance frameworks, yet the capacity constraints remain acute in most jurisdictions.
The Challenge of Regulatory Coherence and Efficiency
Balancing Safety, Innovation, and Fairness
States face a fundamental trilemma in AI governance: promoting innovation necessary for economic competitiveness while ensuring public safety and protecting fundamental rights. The EU’s comprehensive regulatory approach prioritizes safety and rights protection while potentially constraining innovation through compliance burdens and delayed time-to-market for startups. The U.S. approach prioritizes innovation with lighter-touch regulation while creating fragmentation and uncertainty regarding liability and compliance requirements. Neither approach perfectly balances all objectives, and different states have made conscious trade-off choices based on their values and strategic priorities.
Moving from Principles to Operationalized Practice
A critical challenge involves translating abstract ethical principles into concrete, operationalized governance mechanisms. Many frameworks articulate principles such as fairness, transparency, accountability, and human oversight without specifying how organizations should implement them in practice. The conceptual gap between “principle” and “operationalized practice” remains substantial; determining what “meaningful transparency” requires, what constitutes “effective human oversight,” or how to assess “fairness” in specific contexts requires detailed, context-dependent work that generalizable frameworks struggle to provide.
Technical and Institutional Expertise Gaps
Many regulatory bodies lack adequate technical expertise to assess AI system compliance, oversee complex algorithmic systems, or project emerging risks. Building this expertise requires sustained institutional investment, hiring specialists, and developing training programs—commitments many governments have only recently begun making. The challenge intensifies because AI development outpaces regulatory understanding; by the time governance responses mature, technologies have often evolved significantly, requiring constant regulatory adaptation.
Critical Recommendations for States
Enhance International Coordination
States should prioritize strengthening weak international coordination mechanisms for AI governance rather than establishing new centralized institutions. The OECD should be foregrounded as a center of expert AI knowledge, facilitating peer pressure among states and policy harmonization. Scrutiny of different nodes in the emerging regime complex should ensure that they fulfill appropriate functions based on democratic mandates. The UN High-Level Advisory Body on AI can support mapping of the international AI ecosystem and provide recommendations for aligning institutional remits around common goals.
Develop Binding International Norms on High-Risk Applications
While comprehensive global legislation remains unrealistic given geopolitical divisions, states should pursue binding international agreements on particularly dangerous applications such as lethal autonomous weapons systems, AI-enabled mass surveillance, and AI applications affecting fundamental rights in criminal justice and asylum decisions. The Council of Europe’s work developing a legally binding international convention on AI and human rights provides a promising foundation.
Invest in Regulatory Capacity and Expertise
States must substantially invest in building domestic regulatory capacity including hiring technical experts, establishing monitoring infrastructure, and creating institutional processes for adaptive governance as technologies evolve. Regulatory sandboxes should complement but not substitute for sustained regulatory institution-building.
Establish Clear Liability and Responsibility Frameworks
States should clarify liability frameworks assigning responsibility for AI-caused harm, moving beyond traditional tort principles to establish strict liability or presumptions of causation for high-risk AI systems. This reduces legal uncertainty for organizations while ensuring that victims have clear pathways to redress.
Support Capacity-Building in Developing Nations
Developed states should commit genuine resources to supporting capacity-building in developing countries, including infrastructure investments, technical training, and sharing of regulatory experiences. Such support should respect national sovereignty while helping countries develop indigenous AI capabilities aligned with local values and needs.
Mandate and Operationalize Human Oversight
States implementing AI governance should establish specific, enforceable requirements for meaningful human oversight in high-risk systems, not merely abstract principles. This includes mandates for training, resources, and processes ensuring humans retain decision-making authority over consequential AI applications.
The governance of artificial intelligence represents one of the most pressing regulatory challenges of our era. As states navigate complex tensions between innovation and safety, national security and fundamental rights, and concentrated technological capacity and inclusive global governance, they must establish frameworks that are simultaneously flexible enough to accommodate rapid technological change and sufficiently robust to protect public interests and fundamental values. No jurisdiction has yet achieved a fully satisfactory balance, but the emerging frameworks—particularly the EU AI Act’s comprehensive regulatory approach combined with regulatory sandboxes for innovation—provide promising foundations upon which more effective, coordinated global governance can develop.