IALS Lecture, hosted by Chulalongkorn University, Faculty of Law, Bangkok, September 8-9, 2025
Ladies and gentlemen,
It is both a privilege and a pleasure to address you here in Bangkok. May I begin by expressing my gratitude to our hosts for their warm hospitality and for bringing us together in this distinguished setting.
This afternoon I wish to speak about the training of legal professionals. My particular focus will be on the training of judges – yet the questions I raise extend more broadly to legal education in its entirety.
Artificial intelligence is, without exaggeration, one of the defining developments of our time, and the judiciary is no exception to its reach. What was once regarded as the most inherently human, the least automatable of endeavours – the act of judicial decision-making – is now increasingly shaped by AI.
Yet alongside the risks, there is promise. AI can enhance access to justice, assist in managing complex caseloads, and uncover patterns that might otherwise remain hidden. Used wisely, it may even sharpen the quality of human reasoning. In some respects it can act as a mirror, reflecting back to judges their own assumptions, their own habits of thought, and thereby encouraging greater transparency and self-awareness.
But this potential does not eliminate the dangers: dependency on opaque technologies, the reproduction of bias, and the more insidious risk that judicial responsibility begins to shift – from the human conscience to the algorithmic output.
To frame our discussion, I should like to consider four questions: What role can AI play in the judiciary? Why does this necessitate a renewed approach to the training of judges? Which values must remain entirely beyond compromise? And finally, what is the wider institutional and societal perspective?
The role of AI in the judiciary
AI in the courts is no longer a matter of speculation. Allow me to offer a few illustrations.
Algorithms now conduct case law research, scanning vast databases in seconds.
Systems for file analysis are used to detect patterns in complex fraud cases.
In some jurisdictions, AI already drafts the first version of judgments in routine matters such as traffic offences.
There are also the quieter applications: predictive analytics, translation software, speech-to-text, anonymisation tools. Not headline-grabbing in themselves, but gradually reshaping the everyday reality of judicial practice.
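To make one of those quieter tools tangible, allow me a deliberately simplified sketch of an anonymisation pass – a toy illustration in Python, not the software of any actual court, which would rely on trained entity recognition rather than patterns like these:

```python
import re

# Toy anonymisation pass: mask capitalised name pairs and long digit
# sequences before a document is shared or indexed. The patterns are
# illustrative only; real tools use trained named-entity recognition.
NAME_PATTERN = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")  # e.g. "Jan Jansen"
NUMBER_PATTERN = re.compile(r"\b\d{6,}\b")                 # case numbers, IDs

def anonymise(text: str) -> str:
    """Replace likely personal identifiers with neutral placeholders."""
    text = NAME_PATTERN.sub("[NAME]", text)
    return NUMBER_PATTERN.sub("[NUMBER]", text)

print(anonymise("Jan Jansen (case 20250098) appealed the ruling of Maria Smit."))
# -> "[NAME] (case [NUMBER]) appealed the ruling of [NAME]."
```

Even this caricature shows the operative point: a filter this crude will both miss identifiers and redact too much, and only a user who understands that can supervise the tool responsibly.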
These examples highlight the central tension. On the one hand, AI can make justice more efficient, more accessible, more consistent. On the other, it edges ever closer to the heart of judging itself – the weighing of arguments, the reaching of a reasoned decision. AI is, and must remain, a tool. But it is a tool with the capacity to slip, almost imperceptibly, into the role of co-decision-maker.
Why, then, must judicial training adapt?
Traditionally, judicial training has concentrated on doctrinal knowledge, ethics and practical skills. But if AI is to become a structural part of judicial work, judges must also acquire digital literacy: not to programme machines, but to understand enough to question their output with confidence. Three considerations make this urgent.
- The first is the danger of false neutrality. AI may appear objective, yet it inevitably mirrors the biases contained in its data.
- The second is responsibility. No matter how advanced the tool, the judge remains accountable; AI can never substitute for judicial reasoning.
- The third is public trust. Citizens must be able to believe – and to see – that their case is heard and determined by a human judge, not by a machine.
Training, then, is not merely a matter of knowledge. It is about cultivating a cast of mind: critical, inquiring, cautious. The essential skill is not to have all the answers, but to keep asking the right questions. Where has this data come from? What assumptions shape the model? And what does this mean for the human beings before the court?
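Permit me one concrete, deliberately simplified illustration of why the first of those questions matters. The data below is invented for the purpose, and the "model" is statistically trivial. Yet it already shows how a system that merely reflects past outcomes will present historical skew as neutral prediction:

```python
# Invented example: a "predictor" trained on skewed historical outcomes.
# No real court statistics are implied.
history = [
    # (district, outcome) – imagine district happens to track wealth
    ("north", "custodial"), ("north", "custodial"),
    ("north", "custodial"), ("north", "fine"),
    ("south", "fine"), ("south", "fine"),
    ("south", "fine"), ("south", "custodial"),
]

def predict(district: str) -> str:
    """Return the majority outcome previously recorded for this district."""
    outcomes = [o for d, o in history if d == district]
    return max(set(outcomes), key=outcomes.count)

print(predict("north"))  # -> custodial
print(predict("south"))  # -> fine
# The code contains no prejudice; the pattern lives entirely in the data.
```

Nothing in that function is biased, and that is precisely the danger of false neutrality: the question "where has this data come from?" cannot be answered by inspecting the code at all.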
The values that must remain
Here we arrive at the heart of the matter. AI may serve as an adjunct, but the essence of judging does not change. Five values must remain non-negotiable.
Independence – Judges must not become reliant on systems developed by commercial providers or subject to political influence.
Impartiality – Given that AI is prone to bias, judges must be vigilant in safeguarding equality of arms.
Transparency – Judicial reasoning must remain intelligible. If AI has played a part, that fact must be disclosed.
Human dignity – Justice concerns people, not solely data. Empathy and proportionality cannot be automated.
Responsibility – The final decision must always rest with the judge; no algorithm can replace judicial conscience.
These are not only guiding principles; they are boundaries. They mark out what AI can never be permitted to assume.
The role of judicial training
How, then, should training give effect to these demands? At one level, through knowledge and skills: hands-on modules with AI tools; interdisciplinary collaboration with data scientists and ethicists; and case-based exercises in which judges encounter AI in mock trials.
At another level, through reflection and culture: encouraging judges to be critical without being dismissive; to establish boundaries; and to return, again and again, to the central question – what does this mean for the human being in the courtroom?
Among the competences required, I would highlight five:
Recognising AI’s limits – knowing what it can and cannot achieve.
Engaging with scepticism – probing data, assumptions and methods instead of accepting outputs uncritically.
Keeping sight of human perspectives – remembering that justice is about lives, not efficiency.
Weighing ethical dilemmas – balancing privacy, fairness and autonomy.
Reviewing AI against law and rights – ensuring every use is consistent with legality and fundamental rights.
Alongside these come related skills: detecting bias, learning from comparative practice, sharpening one’s reasoning by contrast with AI, and shaping responsible use through dialogue.
But having such objectives written down is not enough. If we are serious about judicial responsibility and public trust, AI training cannot remain optional. It must become structural. It must become mandatory. Only then can we ensure that every judge, not merely the interested few, is equipped to engage with AI critically, responsibly and in full awareness of its implications.
Practical roadmap
Principles alone are not enough; if we wish to safeguard both independence and public trust, we must translate them into practice. Let me therefore suggest a practical roadmap.
First, curriculum design. AI must be embedded in the core curriculum – not as an optional module, but as a compulsory component. This should combine three layers: basic technical literacy (for example, how algorithms process data), legal evaluation (linking outputs to legality and fundamental rights), and ethical reflection (discussing the limits of human versus machine reasoning). Judges must learn not only how to use AI tools, but also when to refrain from them.
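What might that first, technical-literacy layer actually show? Purely as a sketch – the features and weights below are hypothetical, drawn from no real instrument – a toy risk score makes the essential point that every number in such a system is a human design choice:

```python
# Hypothetical teaching example: a toy risk score of the weighted-sum
# shape that many real scoring algorithms share. Features and weights
# are invented; each is a choice a judge should know to question.
WEIGHTS = {
    "prior_offences": 2.0,      # why 2.0 rather than 1.0? a design decision
    "failed_appearances": 1.5,
    "age_under_25": 0.5,
}

def risk_score(features: dict) -> float:
    """Weighted sum of feature values – the core of many scoring tools."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

defendant = {"prior_offences": 3, "failed_appearances": 1, "age_under_25": 1}
print(risk_score(defendant))  # -> 8.0
```

An exercise this small is enough to prompt the right questions: who chose the features, who set the weights, and against what standard were they validated?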
Second, independent development. Training content must remain under the authority of the judiciary itself. Dialogue with academics, technologists and ethicists is essential, but neither governments nor commercial providers should be allowed to set the standards. A practical safeguard could be the establishment of an independent judicial training board that evaluates external input against constitutional and ethical principles.
Third, continuous practice. Training must not end with initial courses. Judges should regularly engage with AI through mock trials where human and algorithmic reasoning are compared, peer sessions to reflect on practical dilemmas, and cross-border exchanges of experience. In some countries this could take the form of annual refresher workshops, ensuring that digital literacy becomes a habit of reflection throughout a judge’s career.
Conclusion
AI will play an ever greater role in the administration of justice. That is precisely why we must invest in training.
The judge of the future will not be the one who replaces AI, but the one who engages with it critically, while never losing sight of the values that define justice.
The same applies to legal education more broadly. Tomorrow’s lawyers, prosecutors and policymakers must all learn to scrutinise technology, to weigh its promises against its risks, and to root their practice in the fundamental values of the rule of law.
In the end, the rule of law depends not on machines but on people – across professions, across borders, across societies.
And it is only by reflecting together, as we are doing here today, that we can ensure that justice in the age of AI remains not only efficient, but also human – and just.
Thank you.
Marc de Werd, judge at the Amsterdam Court of Appeal