‘Abduction of Europa’ (Rembrandt Harmensz. van Rijn, Amsterdam - 1632 - fragment)

Friday, 11 April 2025

The Autonomy of Judges in the Algorithmic Society



Presentation for the AlgoSoc International Conference - The Future of Public Values in the Algorithmic Society, 10 & 11 April 2025, Amsterdam

"The real question is not what AI can do, but what the judiciary must remain. Independent, grounded in the rule of law, and driven by human values."

Good morning, everyone, and thank you to the organisers for inviting me to speak today. 

I am a judge in criminal law at the Court of Appeal in Amsterdam and a part-time professor at the University of Amsterdam. I also represent the Dutch judiciary in the Consultative Council of European Judges, or CCJE. In that capacity, I contributed to an opinion on the use of AI in the judiciary, examining both the opportunities and the challenges it presents. I look forward to sharing some insights from that opinion with you.

Today, I’d like to cover three key points: First, I’ll discuss the current role of AI in the Dutch judiciary. Next, I’ll reflect on Opinion No. 26 from the CCJE, which emphasises the importance of judicial autonomy. Finally, I’ll outline the necessary conditions for the responsible use of AI by judges.

Reality

Let me get straight to the point: how operational is AI in Dutch courts? The reality is mixed, depending on your perspective. At present, AI is not yet part of day-to-day court practice. However, we are piloting AI for anonymising court rulings, and we expect that to become operational shortly.

At the same time, the use of AI tools such as ChatGPT by court staff is strongly discouraged. This is because the judiciary is classified as a high-risk sector under the EU AI Act, and, as it stands, the Dutch judiciary does not yet meet the Act's requirements.

Despite this restriction, ChatGPT is accessed by judges and court staff 3,000 to 4,000 times per month on official devices. These are not all unique visitors; some are heavy users. In some cases, judges are openly experimenting with AI in their rulings. In June 2024, a district judge used ChatGPT to assess the lifespan of solar panels in a civil case, rather than consulting an expert. And just last month, the Rotterdam District Court ran an experiment in which AI helped draft the sentencing considerations in a criminal judgment.

Both of these were high-risk uses of AI, yet no action has been taken in the Netherlands.
In contrast, Portugal’s High Council for the Judiciary is investigating a possible disciplinary offence following a Lisbon Court of Appeal ruling in which AI may have been used to cite non-existent laws and precedents.
 
AI Strategy for the Courts

The Dutch judiciary has taken a cautious approach to implementing AI — and rightly so. That said, considerable work is taking place behind the scenes, especially as the AI Act requires national oversight to be in place by August this year. In January, the judiciary's AI strategy was published, setting out two key principles: first, AI holds great potential for improving judicial work; second, this potential is matched by serious concerns about responsible use and the need for robust safeguards.

Opportunities

AI is proving valuable in the judiciary — but primarily as a supportive tool. It is especially useful for administrative and logistical tasks, such as scheduling hearings, anonymising decisions, and streamlining workflows.

AI also plays an increasing role in public communication and information services. It can automatically generate summaries of judgments and make legal information more accessible to citizens, journalists and legal professionals. This enhances transparency and accessibility in the justice system.

In terms of content, AI offers legal and substantive support. It can assist with case law research, manage deadlines, and support complex EU and international matters. In this sense, AI functions as a “smart assistant” — helping to ease the workload while leaving the final responsibility to the judge.

AI is also useful in analysing complex case files. It can structure, process and summarise large volumes of information. In some cases, it may even assist — to a limited extent — in drafting judgments. But here, caution is essential. When it comes to interpreting the law, balancing interests, and applying legal principles to individual cases, the human judge remains indispensable.

I view AI as a valuable sparring partner. It can present counterarguments and expose weaknesses in reasoning, contributing to a more thorough analysis. By testing legal arguments through AI, lawyers can strengthen their reasoning and anticipate objections. But AI must be seen as a tool — not a substitute for human judgment.

Concerns

The AI strategy for the courts also recognises the associated risks. "The judiciary is the third branch of government and plays a crucial role in a democratic constitutional state. It must remain independent — not only from the legislative and executive branches, but also from other external influences."

This quote is directly drawn from Opinion No. 26 of the CCJE. The Consultative Council of European Judges is a body of the Council of Europe composed exclusively of judges. Each year, it publishes an opinion on a topic central to the judiciary’s core values. In 2023, the theme was “Moving forward: the use of assistive technology in the judiciary.”

A key concern in Opinion 26 is how judges can safeguard their autonomy in an age of emerging technologies. Judicial autonomy and independence must always come first: decisions must be made by judges, not technology. Technology must not erode a judge’s critical thinking. This is a clear warning to protect judicial autonomy.

"The use of technology must, above all, respect the nature of the judicial process. Technology must not step into the realm of justice. Technology must not discourage or impede the critical thinking of judges as this can lead to stagnation of legal development and an erosion of the system of legal protection. Technological tools must therefore respect the process of judicial decision-making and the autonomy of judges." CCJE Opinion 26 (2023) Moving forward: the use of assistive technology in the judiciary

How is Autonomy Threatened?

Opinion 26 outlines how AI can undermine judicial autonomy. First, AI is not design-neutral. It depends on the data it is trained on, which makes it vulnerable to bias and manipulation. Without a firm grasp of legal context, AI may disrupt rather than support judicial work. Many data engineers lack the legal expertise to create systems aligned with judicial practice. Judges must therefore be involved in the design process, to ensure AI delivers reliable and just support.

Opinion 26 also stresses that AI is not merely a technical issue — it is political. A handful of major companies, such as Microsoft and OpenAI, determine how AI is developed and what risks are considered acceptable. They shape both the technical parameters and the ethical standards of AI systems. Commercial interests may take precedence over transparency, accountability and fairness.

The current geopolitical climate increases AI’s vulnerability in the judiciary. Rising political polarisation and mistrust in independent institutions heighten the risk of AI being exploited for political ends. Growing dependence on commercial tech companies, combined with geopolitical tensions, makes judicial AI systems more susceptible to cyberattacks and manipulation. All of these factors increase the risk that AI will undermine justice — serving political agendas rather than the rule of law.

A final concern is the loss of judicial skill. AI tools such as ChatGPT provide fluent, seemingly reliable responses — but relying on them blindly is dangerous. AI is only as good as its data and algorithms. Opinion 26 warns that AI may erode a judge’s evaluative skills, shifting legal reasoning from the judge to the machine. Many AI systems operate as a “black box”, making it hard for judges to evaluate their reliability. This poses a threat to judicial autonomy and legitimacy. If judges become overly reliant on AI, the core of legal reasoning risks shifting away from human deliberation — a fundamental challenge to the rule of law. AI must assist, not replace, judicial decision-making.

Recommendations

To ensure AI supports justice rather than undermining it, several steps are essential. Judges must be involved from the outset in the design of AI systems. This ensures those systems reflect judicial realities and uphold the rule of law. 
Judges also need continuous education on AI — understanding what it can and cannot do, as well as the associated risks. 
Clear boundaries must be drawn. AI should assist judges, not replace them. 
Only transparent and explainable systems should be used — systems we can understand and trust. 
Proper oversight is critical. We must invest in internal structures to monitor how AI is used within the judiciary. 
But robust public oversight is just as important. AI in the courts must serve justice — not commercial or political interests. 
And above all: we must protect human judgement. AI can be a powerful tool, but it is the judge who must remain the ultimate decision-maker.

In Conclusion

While AI presents significant opportunities, it also brings serious risks — especially given the vulnerability of the judiciary as the third branch of government. Irresponsible use by judges could permanently undermine public trust in the courts. 

The real question is not what AI can do, but what the judiciary must remain. Independent, grounded in the rule of law, and driven by human values. In an era of rapid technological and geopolitical change, it is vital that we now develop the institutional awareness and oversight needed to safeguard these fundamental principles.

Thank you very much for your attention.

Marc de Werd, 11 April 2025
