The Brief
AI’s impact on law is systemic, and the transformation it is precipitating is structural. The systems and institutions through which law operates are under pressure from democratic erosion and rule of law deterioration. This convergence constitutes systemic legal risk: the destabilization of the foundational conditions that AI transformation requires and AI governance depends on. That is this issue’s throughline. Two major reports document the scale of decline. A landmark research paper argues that current approaches to building AI for democracy are fundamentally flawed and proposes a new technical agenda. The first binding international AI treaty has entered into force with a concrete human rights impact assessment framework. The judiciary faces an AI competency gap at the precise moment its role matters most.
On the Radar: Strategic Signals
92 Autocracies, 87 Democracies: The Operating Environment for AI Governance in 2026
The V-Dem Democracy Report 2026 reports that the world now has 92 autocracies and 87 democracies, with 74% of the world's population (6 billion people) living in autocracies as of the end of 2025. The backsliding is not confined to countries already classified as autocratic: a record 41% of the world's population now live in countries that are actively autocratizing. The United States lost its status as a liberal democracy for the first time in over 50 years. US civil rights, equality before the law, freedom of expression, and media freedom were measured at their lowest levels in 60 years. Only 7% of the global population (0.6 billion people) live in liberal democracies. At this scale, autocratic countries now possess the economic and political weight to reshape international organizations, norms, and trade, precipitating historic geopolitical and systemic legal risk.
The Liberties Rule of Law Report 2026, drawn from nearly 40 civil society organizations across 22 EU member states, documents the impacts of this inside legal systems:
- Gaps in enforcement of judgments: over 650 European Court of Human Rights judgments await full implementation.
- 61% of all assessed rule of law recommendations for member states show no visible progress in areas such as justice, anti-corruption, media freedom, and checks and balances. An additional 13% of member states are actively backsliding on rule of law measures.
- Legislative environments are compounding enforcement gaps. The report highlights fast-track, non-consultative law-making among EU institutions, including the Digital Omnibus, which proposes changes to the Artificial Intelligence Act, GDPR, and ePrivacy Directive. Combined with the report’s findings on erosion of civil liberties, failures to ensure fundamental rights safeguards, and sustained campaigns to undermine watchdogs, AI governance is directly implicated.
- Surging repression of civil society, a key constituency for advancing public interest in AI transformation, weakens a critical oversight pillar.
- Beyond institutional erosion, the report identifies a deeper problem: declining public understanding that the rule of law matters, which poses another dimension of uncertainty and risk.
Our takeaway: Systemic legal risk belongs on policy and board agendas. 44 countries experienced democratic decline in 2025. AI governance frameworks assume stable rule of law conditions: independent judiciaries, enforceable regulations, predictable policy environments. Where those foundations are weakening, AI governance strategy should include responsive measures.
→ V-Dem Institute: Democracy Report 2026 (March 18, 2026)
→ Civil Liberties Union for Europe: Liberties Rule of Law Report 2026
Why Current Approaches to Building AI for Democracy Fall Short
A major research paper published March 3 by Gillian K. Hadfield, Rakshit Trivedi, and Dylan Hadfield-Menell at the Knight First Amendment Institute identifies a central design flaw in how the AI field has approached building for democracy. Current technical approaches treat democratic alignment as a parameter problem: elicit values, encode them as fixed constraints, and pre-load them into a model. Constitutional AI, democratic input mechanisms, and law-following AI all share this assumption. Hadfield, Trivedi, and Hadfield-Menell argue the assumption is wrong. Democracy is not a set of rules to encode. It is a complex adaptive system constituted by the voluntary cooperation of millions of agents (people and institutions) enforcing shared norms through daily decisions. The institutions of democracy, including independent courts, do not merely declare rules. They function as spaces where civic relationships are enacted and where decentralized enforcement is produced. AI agents are now entering this system as economic actors, making decisions about hiring, procurement, contracting, insurance, compliance, and content. Without what the authors call normative competence, the computational capacity to detect a dynamic normative environment, predict its trajectory, and participate in the distributed enforcement mechanisms that democratic order depends on, AI agents will erode precisely the enforcement incentives that sustain democracy. Current alignment approaches were not designed to address this.
The paper lays out a technical research agenda with two components. First, building normative competence into individual AI agents: the capacity to read and adapt to evolving social norms at inference time rather than relying on pre-trained values. Second, constructing new digital classification institutions, what the authors call Model Specification Institutions, that are legible to AI, operate at AI speed and scale, and are governed by democratic processes rather than by AI developers. These institutions would provide real-time normative guidance through transparent APIs, grounded in citizen assemblies, expert panels, and digital juries.
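To make the proposal concrete, here is a minimal, purely illustrative sketch of the interaction pattern the paper describes: an agent with normative competence consults a Model Specification Institution at inference time rather than acting on pre-trained values alone. No such API exists yet; every name, rule, and signature below is our assumption, not the authors' specification, and the institution is stubbed in place of the democratically governed service (citizen assemblies, expert panels, digital juries) the paper envisions.

```python
from dataclasses import dataclass


@dataclass
class NormativeGuidance:
    """Hypothetical response from a Model Specification Institution."""
    action: str
    permitted: bool
    rationale: str


class StubInstitution:
    """Stand-in for a democratically governed guidance API.

    In the paper's proposal, guidance would be produced by citizen
    assemblies, expert panels, and digital juries and served over
    transparent APIs; here it is a fixed lookup table for illustration.
    """
    RULES = {
        "screen_job_applicants_by_zip_code": (False, "proxy for protected class"),
        "flag_contract_for_human_review": (True, "routine compliance step"),
    }

    def query(self, action: str) -> NormativeGuidance:
        permitted, rationale = self.RULES.get(
            action, (False, "unknown action: default deny")
        )
        return NormativeGuidance(action, permitted, rationale)


def agent_act(institution: StubInstitution, action: str) -> str:
    """An agent checks current guidance at inference time, rather than
    relying only on values fixed at training time."""
    guidance = institution.query(action)
    return "proceed" if guidance.permitted else f"refrain ({guidance.rationale})"


if __name__ == "__main__":
    inst = StubInstitution()
    print(agent_act(inst, "flag_contract_for_human_review"))    # proceed
    print(agent_act(inst, "screen_job_applicants_by_zip_code"))  # refrain (...)
```

The design point the sketch captures is the shift from static to dynamic: because guidance lives in the institution rather than in model weights, the normative environment can evolve (and be democratically contested) without retraining the agent.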
Our takeaway: No instruction set can anticipate every normative decision an AI agent will face. Hadfield's research agenda proposes building dynamic normative competence into AI agents and creating democratic institutions that operate at AI speed. In the context of the democratic erosion reported by V-Dem, AI agents deployed at scale risk compounding the erosion of the very norms they should be reinforcing. Hadfield's proposed agenda offers a potential solution. Policy leaders and builders should engage with it.
The First Binding International AI Treaty Enters Force with a Human Rights Impact Assessment Framework
On March 11, 2026, the European Parliament backed EU ratification of the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The Convention entered into force on November 1, 2025, and has 37 signatories, including the EU (on behalf of its 27 member states), the United Kingdom, Canada, the United States, Japan, Ukraine, and Uruguay.
The treaty’s most consequential provision applies to both public bodies and private sector organizations: parties must require that entities carrying out AI-related activities conduct iterative risk and impact assessments of actual and potential effects on human rights, democracy, and the rule of law. A dedicated tool, HUDERIA (Human Rights, Democracy, and Rule of Law Impact Assessment), was developed alongside the Convention to operationalize this requirement. HUDERIA provides a four-element methodology: context-based risk analysis mapping risk factors across application, design, and deployment contexts; stakeholder engagement; structured risk and impact assessment evaluating severity, scope, reversibility, and probability; and a mitigation plan. The methodology covers 17 defined areas of concern, from physical integrity and privacy to judicial independence and democratic participation.
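As a minimal sketch of how an organization might record HUDERIA's structured risk and impact element, the following models one assessment entry per area of concern along the four factors the methodology names (severity, scope, reversibility, probability). HUDERIA itself is a qualitative methodology, not software; the field names, 1-4 scales, and the toy prioritization rule below are our assumptions for illustration, not part of the official tool.

```python
from dataclasses import dataclass


@dataclass
class HuderiaRiskEntry:
    """One assessed risk, illustrative only (scales are assumed, not official)."""
    area_of_concern: str   # one of HUDERIA's 17 defined areas, e.g. "privacy"
    severity: int          # 1 (minor) .. 4 (critical)
    scope: int             # 1 (few affected) .. 4 (population-wide)
    reversibility: int     # 1 (easily reversed) .. 4 (irreversible)
    probability: int       # 1 (unlikely) .. 4 (near-certain)

    def priority_score(self) -> int:
        """Toy aggregation for triage; HUDERIA prescribes a structured
        qualitative assessment, not a numeric formula."""
        return self.severity * self.probability + self.scope + self.reversibility


def mitigation_queue(entries: list[HuderiaRiskEntry]) -> list[str]:
    """Order areas of concern for the mitigation plan, highest priority first."""
    ranked = sorted(entries, key=lambda e: e.priority_score(), reverse=True)
    return [e.area_of_concern for e in ranked]


entries = [
    HuderiaRiskEntry("privacy", severity=3, scope=3, reversibility=2, probability=3),
    HuderiaRiskEntry("judicial independence", severity=4, scope=4, reversibility=4, probability=2),
    HuderiaRiskEntry("democratic participation", severity=2, scope=4, reversibility=2, probability=2),
]
print(mitigation_queue(entries))
# ['judicial independence', 'privacy', 'democratic participation']
```

The point of structuring entries this way is the last of HUDERIA's four elements: the mitigation plan has to prioritize among assessed risks, and recording the four factors per area of concern makes that prioritization explicit and auditable.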
For access to justice, the implications are direct. AI systems used in legal aid delivery, court administration, case management, and judicial decision support are within scope. The obligation to assess impacts on equality before the law, access to remedy, and the independence of legal processes applies to both governments deploying these systems and the private companies building them.
The Convention sits on top of the EU AI Act, not beside it. Legal tech companies face layered obligations: the EU AI Act’s technical risk classification requirements and the Convention’s human rights and rule of law impact assessments operating through different legal mechanisms on the same products.
The ENSURED research program, which tracks the Convention’s implementation, notes that its value will depend on whether ratifications extend beyond Europe and whether follow-up mechanisms build accountability over time. The EU AI Act carries direct enforcement authority that the Convention does not.
Our takeaway: The layered compliance obligation the Convention creates on top of the EU AI Act is a new variable for actors in this space. Beyond their application to individual AI systems, its provisions on human rights and rule of law impact offer new vectors for addressing broader declines in these areas. The Convention and the related HUDERIA framework provide helpful leverage for the access to justice community.
→ European Parliament: Recommendation A10-0007/2026 (March 11, 2026)
→ Council of Europe: Framework Convention on AI, Human Rights, Democracy and the Rule of Law
→ ENSURED: Global AI Regulation and the Council of Europe Framework Convention
→ Council of Europe: HUDERIA Methodology and Model for AI Risk and Impact Assessment
61.6% of US Federal Judges Use AI. 45.5% Lack Training
A new Northwestern University Federal Courts Survey (March 2026), the first random-sample study of its kind, found that 61.6% of US federal judges use at least one AI tool in their judicial work, while 45.5% report receiving no AI training from court administration. UNESCO's 2024 global survey across 96 countries found similar patterns internationally, prompting updated Guidelines for AI in Courts and Tribunals launched in December 2025. The training deficit is the critical variable: institutional infrastructure, including governance frameworks, training programs, and accountability mechanisms to ensure that adoption strengthens rather than erodes judicial independence and due process, is urgently needed given the other pressures on judicial bodies.
Our takeaway: The gap between judicial AI adoption and institutional preparedness is systemic legal risk in one of democracy's most consequential institutions. Peer learning models and communities of practice may offer a near-term solution as institutions move to adapt at the systems level.
→ UNESCO: Survey Uncovers Critical Gaps in AI Training Among Judicial Operators