The Brief

As AI transformation accelerates in the legal sector, it is unfolding against historic declines in democracy and rule of law. The same institutions that AI governance and justice depend on are under pressure, yet these conversations often remain siloed. This issue scans how democratic erosion, AI competence in justice systems, and AI governance intersect—highlighting flaws in model design and the emergence of new treaty frameworks that can support a response.

Two major reports document the scale of democratic and rule‑of‑law decline. At the same time, the judiciary faces external threats to its independence and an AI competency gap at the precise moment its role matters most. New research argues that prevailing approaches to building “AI for democracy” are fundamentally flawed and proposes a new methodology. And the first binding international AI treaty has entered into force, pairing AI‑specific obligations with a human rights, democracy, and rule‑of‑law impact assessment framework that offers an integrated response to the current context.

On the Radar: Strategic Signals

92 Autocracies, 87 Democracies: The Operating Environment for AI Transformation and Governance in 2026

V-Dem released “Unraveling the Democratic Era,” its 2026 Democracy Report. V-Dem concludes that the world now has 92 autocracies and 87 democracies, and that 74% of the world’s population (6 billion people) lived in autocracies as of the end of 2025. The backsliding is not confined to countries already classified as autocratic: a record 41% of the world’s population now live in countries that are actively autocratizing. Only 7% of the global population (0.6 billion people) live in liberal democracies. The report finds that the United States lost its status as a liberal democracy for the first time in over 50 years, with US civil rights, equality before the law, freedom of expression, and media freedom measured at their lowest levels in 60 years. V-Dem also highlights the compounding consequences of these findings, noting that autocratic countries now possess the economic and political power to reshape international organizations, norms, and trade. The report paints a picture of historic, multi-faceted systemic legal risk.

The Civil Liberties Union for Europe also released its 2026 Liberties Rule of Law Report, focusing on the state of the rule of law in the European Union. Drawing on research across 22 EU member states, the report provides a detailed snapshot of the rule of law in EU member states that complements the V-Dem findings.

Key findings include:

  • Gaps in enforcement of judgments are widespread. The report specifically points to over 650 European Court of Human Rights judgments awaiting full implementation.
  • Rule of law is stagnating or declining. Of the member states that received EU recommendations on steps to advance the rule of law, 61% have shown no visible progress in areas such as justice, anti-corruption, media freedom, and checks and balances, and an additional 13% are actively backsliding on rule of law measures.
  • Legislative environments are compounding enforcement gaps. The report highlights fast-track, non-consultative law-making among EU institutions, including the Digital Omnibus, which proposes changes to the Artificial Intelligence Act, the GDPR, and the ePrivacy Directive. Combined with the report’s findings on erosion of civil liberties, failures to safeguard fundamental rights, and sustained campaigns to undermine watchdog organizations, this points to legal uncertainty across a spectrum from policy-making to enforcement.
  • Repression of civil society is surging, weakening a key constituency for advancing the rule of law, providing oversight, and ensuring institutional accountability.
  • Public understanding that the rule of law matters is declining, posing another dimension of uncertainty and risk.

Our takeaway: Democratic and rule-of-law erosion is creating systemic legal risk that implicates AI transformation and governance. AI governance frameworks and new legal sector models are being built on foundations that are actively eroding, which creates uncertainty, potentially undermines their effectiveness, and heightens the imperative for context-aware strategies and design. At the same time, AI governance and other AI infrastructure building hold potential to address the current instability; with strategic guidance, intentional design, and monitoring, they can serve as mechanisms for strengthening the rule of law and democratic ideals. Mitigating risks and capturing opportunities requires intentionality: policymakers, boards, and other leaders should adopt a holistic frame and incorporate the broader context into design and strategy.


V-Dem Institute: Democracy Report 2026 (March 18, 2026)

Civil Liberties Union for Europe: Liberties Rule of Law Report 2026

61.6% of US Federal Judges Use AI. 45.5% Lack Training

A new Northwestern University Federal Courts Survey (March 2026), the first random-sample study of its kind, found that 61.6% of US federal judges use at least one AI tool in their judicial work, while 45.5% report no AI training from court administration. UNESCO’s 2024 global survey across 96 countries found similar patterns internationally, prompting updated Guidelines for AI in Courts and Tribunals launched in December 2025. Coupled with political attacks on judicial independence, these findings reveal a judiciary under dual pressure: external threats to authority and internal gaps in technical competence and governance.

For justice systems, this dual pressure is a structural risk. Courts are being asked to safeguard democratic order and the rule of law at the precise moment their own epistemic and technological foundations are shifting. Without deliberate investment in AI literacy, governance frameworks, and accountability mechanisms tailored to the judiciary, there is a real danger that AI transformation will amplify existing vulnerabilities.

In the near term, peer learning models and communities of practice among judges and court administrators may be one of the fastest ways to strengthen competence and share emerging guardrails while formal standards and institutional infrastructure catch up.


Our takeaway: The gap between judicial AI adoption and institutional preparedness is systemic legal risk in one of democracy’s most consequential institutions. AI literacy and governance should be treated as core to protecting the rule of law, not as optional professional development. Judicial leaders and administrative bodies should move quickly to establish minimum AI competence standards, shared guardrails, and low-barrier near-term measures such as peer learning and communities of practice while institutions adapt at the systems level.


Northwestern University: Artificial Intelligence in Federal Courts, A Random-Sample Survey of Judges (March 2026, NYC Bar Association)

UNESCO: Survey Uncovers Critical Gaps in AI Training Among Judicial Operators

Why Current Approaches to Building AI for Democracy Fall Short

Gillian K. Hadfield, Rakshit Trivedi, and Dylan Hadfield-Menell, with the Knight First Amendment Institute, published new research that identifies a central design flaw in how the AI field has approached building for democracy. Their report finds that current technical approaches treat democratic alignment as a parameter problem, focusing on eliciting values, encoding them as fixed constraints, and pre-loading them into models. Constitutional AI, democratic input mechanisms, and law-following AI all share this assumption. The result, the authors argue, is that AI agents lack what they term ‘normative competence’: as agents increasingly act as economic proxies, for example making decisions on hiring, procurement, and compliance, they lack the capacity to navigate a dynamic normative environment.

The authors describe democracy as a complex adaptive system constituted by the voluntary cooperation of people and institutions who enforce shared norms through their daily decisions. Democratic institutions such as independent courts do not merely declare rules; they function as spaces where civic relationships play out and where the decentralized enforcement of rules takes place. Agents lack the computational capacity to detect this dynamic normative environment, predict its trajectory, and participate in the distributed enforcement mechanisms that democratic order depends on. This gap creates risk that agents will erode the enforcement systems and incentives that sustain democracy.

The authors lay out a technical research agenda to address current gaps in alignment. They propose:

  • Building normative competence into individual AI agents, including by developing the capacity to read and adapt to evolving social norms at inference time rather than relying on pre-trained values.
  • Constructing new digital classification institutions, what the authors call Model Specification Institutions, that are legible to AI, operate at AI speed and scale, and are governed by democratic processes rather than by AI developers. These institutions would provide real-time normative guidance through transparent APIs, grounded in citizen assemblies, expert panels, and digital juries.
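To make the second proposal concrete, the sketch below imagines what querying a Model Specification Institution at inference time might look like. Everything here (class names, the rule set, the default-deny behavior) is an illustrative assumption, not an API defined in the report; the point is only the shape of the idea: an agent consults a versioned, democratically governed guidance service before acting, instead of relying on values frozen into its weights at training time.

```python
from dataclasses import dataclass

@dataclass
class Guidance:
    action: str       # proposed agent action, e.g. "screen_job_applicants"
    permitted: bool   # whether current norms permit the action
    conditions: list  # conditions attached by the institution
    version: str      # norms evolve, so guidance is versioned

class ModelSpecInstitution:
    """Stub standing in for a hypothetical, democratically governed
    guidance service. In the authors' proposal, the rules would be set
    by citizen assemblies, expert panels, and digital juries."""

    def __init__(self):
        self._rules = {
            "screen_job_applicants": Guidance(
                "screen_job_applicants", True,
                ["log criteria for audit", "allow human appeal"], "2026-03"),
            "infer_political_views": Guidance(
                "infer_political_views", False, [], "2026-03"),
        }

    def query(self, action: str) -> Guidance:
        # Unknown actions default to "not permitted pending review" rather
        # than falling back on values pre-loaded at training time.
        return self._rules.get(
            action,
            Guidance(action, False, ["escalate for review"], "2026-03"))

# An agent consults the institution before acting.
msi = ModelSpecInstitution()
decision = msi.query("screen_job_applicants")
```

The design choice worth noting is that guidance is fetched and versioned at inference time, which is what lets the normative environment keep evolving independently of any one model's training run.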

Our takeaway: This research highlights critical gaps whose importance is elevated by the broader context of democratic erosion around the world. Without action to address them, agentic AI risks contributing to, and potentially amplifying, that erosion. The authors’ proposed antidote (building dynamic normative competence into AI agents and creating democratic institutions that operate at AI speed) offers a potential alternative. Policy leaders and builders should note the flaws in current approaches, engage with the proposed agenda, and adopt design and governance measures that mitigate these risks.


Knight First Amendment Institute: Building AI for the Democratic Matrix (Hadfield, Trivedi, Hadfield-Menell, March 3, 2026)

The First Binding International AI Treaty Enters Force with a Human Rights Impact Assessment Framework

On March 11, 2026, the European Parliament backed EU ratification of the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The Convention entered into force on November 1, 2025, and has been signed by 37 parties, including the EU on behalf of its 27 member states, the United Kingdom, Canada, the United States, Japan, Ukraine, and Uruguay.

The treaty applies to both public bodies and private sector organizations and calls for parties to require that entities carrying out AI-related activities conduct iterative risk and impact assessments on actual and potential effects on human rights, democracy, and the rule of law. A dedicated tool, HUDERIA (Human Rights, Democracy, and Rule of Law Impact Assessment), was developed alongside the Convention to operationalize this requirement. HUDERIA provides a four-element methodology: context-based risk analysis mapping risk factors across application, design, and deployment contexts; stakeholder engagement; structured risk and impact assessment evaluating severity, scope, reversibility, and probability; and a mitigation plan. The methodology covers 17 defined areas of concern, from physical integrity and privacy to judicial independence and democratic participation.
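The four HUDERIA elements described above can be pictured as a simple data model. The sketch below is an illustration of that structure only: the field names, the 1-to-5 numeric scale, and the additive scoring are assumptions made here for clarity, whereas the official methodology prescribes its own templates and a structured qualitative evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class RiskFactor:
    context: str        # "application", "design", or "deployment"
    description: str

@dataclass
class ImpactAssessment:
    area_of_concern: str  # one of the 17 defined areas, e.g. "privacy"
    severity: int         # 1 (minimal) .. 5 (severe), illustrative scale
    scope: int
    reversibility: int
    probability: int

    def score(self) -> int:
        # Simple additive score for illustration; HUDERIA itself calls
        # for structured qualitative judgment, not this arithmetic.
        return self.severity + self.scope + self.reversibility + self.probability

@dataclass
class HuderiaReview:
    context_risks: list[RiskFactor] = field(default_factory=list)      # element 1
    stakeholders: list[str] = field(default_factory=list)              # element 2
    assessments: list[ImpactAssessment] = field(default_factory=list)  # element 3
    mitigations: dict[str, str] = field(default_factory=dict)          # element 4

    def priorities(self) -> list[ImpactAssessment]:
        # Rank areas of concern to drive the mitigation plan.
        return sorted(self.assessments, key=lambda a: a.score(), reverse=True)

# Example: two assessed areas; the higher-scoring one leads the plan.
review = HuderiaReview(
    context_risks=[RiskFactor("deployment", "use in court case triage")],
    stakeholders=["litigants", "judges", "legal aid providers"],
    assessments=[
        ImpactAssessment("privacy", 2, 2, 2, 2),
        ImpactAssessment("judicial independence", 5, 4, 5, 3),
    ],
)
top = review.priorities()[0]
```

The structure mirrors the treaty's iterative intent: because assessments and mitigations are held as mutable collections, a review can be re-run and updated as a system moves from design into deployment.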

The implications in the area of access to justice are notable. AI systems used in legal aid delivery, court administration, case management, and judicial decision support fall within the treaty’s scope. The obligation to assess impacts on equality before the law, access to remedy, and the independence of legal processes applies to both governments deploying these systems and the private companies building them.

The Convention sits on top of the EU AI Act, not beside it. Legal tech companies face layered obligations: the EU AI Act’s technical risk classification requirements and the Convention’s human rights and rule of law impact assessments operating through different legal mechanisms on the same products.

The ENSURED research program, which tracks the Convention’s implementation, notes that its value will depend on whether ratifications extend beyond Europe and whether follow-up mechanisms build accountability over time. The EU AI Act carries direct enforcement authority that the Convention does not.


Our takeaway: The Convention is the first binding international instrument that explicitly requires AI impact assessment against human rights, democracy, and rule of law standards. While ratification is pending in many jurisdictions, the treaty’s strategic framework and explicit treatment of these areas hold promise for promoting and guiding rights-centered systems and products. The associated HUDERIA methodology fills a gap in practical guidance and can aid leaders, developers, and advocates in design, governance, and advocacy.


European Parliament: Recommendation A10-0007/2026 (March 11, 2026)

Council of Europe: Framework Convention on AI, Human Rights, Democracy and the Rule of Law

ENSURED: Global AI Regulation and the Council of Europe Framework Convention

Council of Europe: HUDERIA Methodology and Model for AI Risk and Impact Assessment