PONT Data&Privacy


Responsible and innovative: AI for a just justice system

Artificial Intelligence (AI) is radically changing the world. It will impact how the Judiciary operates, adjudicates and supervises. Litigants will use AI, AI (or AI-generated content and decisions) will be the subject of litigation, and AI will affect evidence. AI can also offer solutions to major issues facing the Judiciary, such as the structural shortage of judges, access to justice and excessively long processing times. At the same time, we should not close our eyes to challenges such as the protection of fundamental rights, including privacy, non-discrimination and (judicial) autonomy. This strategy describes how the Judiciary views AI and how it intends to deal with this development.

Rechtspraak.nl 10 June 2025

Artificial Intelligence is everywhere. AI is a systems technology and, like the Internet, will radically change our society. Either way, the Judiciary must come to terms with it. On the one hand, AI manifests itself in casework: how do we deal with AI-generated litigation and evidence? What effect does AI have on evidence and on the equality of arms? On the other hand, AI offers opportunities to the Judiciary itself. How do we take advantage of them? How do we deploy AI on the big issues facing the Judiciary?

We see short-term opportunities for improving labor-intensive administrative and logistical processes such as rostering and scheduling, pseudonymizing statements, and improving control of processes via predictive models for inflow and routing, as well as a tool for preparing news releases, letters, presentations and the like. We also see opportunities for improving contact with society and citizens, for example by automatically summarizing judgments at B1 language level and by providing information through natural-language chatbots. In the legal field too, we believe AI can help improve labor-intensive processes, such as checking time limits, finding the relevant case law and detecting discrepancies within the Accounts and Responsibilities reports of Supervision. Finally, we see a major role for AI in supporting the analysis, structuring and summarizing of large, complex files and, for example, the preparation of draft judgments.
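One of the labor-intensive processes mentioned above, pseudonymizing statements, can be sketched in miniature. The patterns and placeholder tags below are illustrative assumptions only; pseudonymization of real court documents relies on NLP-based named-entity recognition rather than a handful of regular expressions.

```python
import re

# Illustrative sketch: placeholder tags and patterns are assumptions,
# not the Judiciary's actual pseudonymization rules.
PATTERNS = {
    "[NAME]": re.compile(r"\b(?:Mr\.|Mrs\.|Ms\.)\s+[A-Z][a-z]+\b"),
    "[DATE]": re.compile(r"\b\d{1,2}-\d{1,2}-\d{4}\b"),
    "[CASE_NO]": re.compile(r"\bC/\d{2}/\d{6}\b"),  # hypothetical case-number format
}

def pseudonymize(text: str) -> str:
    """Replace person names, dates and case numbers with placeholder tags."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(pseudonymize("Mr. Jansen appeared on 10-06-2025 in case C/09/123456."))
# → [NAME] appeared on [DATE] in case [CASE_NO].
```

A rule-based pass like this illustrates why the process is labor-intensive when done by hand, and why it lends itself to (human-checked) automation.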

The use of AI technology (both by litigants and by the Judiciary itself) must be consistent with rule-of-law requirements for due process, access to justice, and judicial independence and impartiality. This may sound like stating the obvious, but it is not. Technology is not value-free and, moreover, by concentrating knowledge, capital and data, it is a power factor. We must ensure that the judicial domain remains immune to unwanted technological influences. One tool for this is the European AI Regulation, which distinguishes between low-risk and high-risk applications and imposes various obligations on the use of AI, depending on the risk profile. As the Judiciary, we follow this regulation as a matter of course.

The Judiciary constitutes the third state power and plays a crucial role in the democratic rule of law. This power must remain independent both of the two political state powers, the legislature and the executive, and of other parties. Artificial Intelligence has the potential to influence judicial autonomy directly and indirectly. Both the European Union in its AI Regulation and the Council of Europe in CCJE Opinion 26 have explicitly called for measures to ensure that AI does not improperly influence judicial judgment.

The ambition of the judiciary is to recognize AI risks and deal with them appropriately. At the same time, AI offers a powerful tool to address major challenges facing the judiciary. However, the use of AI takes place exclusively within the framework of the rule of law, with human control, transparency and ethical safeguards at its core.

The judiciary sees a lot of added value in the use of AI in low-risk processes and therefore focuses on this. This is done responsibly and in line with the core values of the judiciary. Through a 'learning-by-doing' approach, knowledge and skills are built up, technically, legally and organizationally. This lays a solid foundation to meet the stringent requirements for high-risk AI in the future. Until then, the use of AI in high-risk processes remains ruled out in order to fully guarantee the independence and reliability of the judiciary.

The Judiciary plays a crucial role in the application, and thus also the further interpretation, of the law in an AI era through the concrete cases brought before it. The Judiciary is an essential actor in shaping legal protection, legal certainty and the like in a democratic constitutional state in which AI increasingly permeates every corner of society (including by providing a 'counterweight'). We use a 10-point plan to realize this strategy:

  1. We are developing a balancing framework for the protection of the Judiciary as an independent third state power, judicial autonomy, fundamental rights (including Article 6 ECHR), our core values (including values relevant to AI such as sovereignty and sustainability), ethics (e.g., through the use of the IAMA, the Impact Assessment for Human Rights and Algorithms), and the like;
  2. We set up independent oversight of the proper use of that balancing framework, so that not only the question 'may we do this?' is asked, but also the question 'do we want this?';
  3. Together with the profession, we develop visions, regulations, agreements or whatever else is needed to deal adequately with AI in case practice. We are in dialogue about this with relevant stakeholders such as the NOvA and chain partners;
  4. We invest in an information, development and training program on dealing with AI in the caseload of judicial practice. We also do this in order to recognize, develop, use and ensure the proper deployment of new AI applications;
  5. We invest in and actively participate in public initiatives, such as joint action with chain partners including the J&V datalab, the NFI and GPT-NL. We also cooperate in a European context (ENCJ, CCJE) and with civil society organizations and academia;
  6. We continue to experiment, partly on the initiative of, and with ideas from within, the courts. Where possible, we scale up quickly to national implementation and share our experiences;
  7. We are investing in a data and AI platform that can scale up beyond experimentation to actually add value for courts and litigants;
  8. We are working to continuously improve the quality and availability of justice data, recognizing that these are prerequisites for the use of AI. We are developing a framework for responsible data sharing, taking into account privacy, availability, integrity, and confidentiality of data;
  9. We focus particularly on low-risk applications. Through 'learning-by-doing' we become increasingly able to meet the preconditions for applications that move toward the high-risk category. However, we rule out judge-replacing applications (such as robot judges);
  10. We ensure adequate control and monitoring of AI projects and AI applications. We comply with the AI Regulation. We are transparent about the deployment of AI and record AI deployments in a publicly accessible algorithm/AI registry. We regularly review and update our AI strategy in line with technological and societal developments.
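As an illustration of point 10, an entry in a public algorithm/AI registry might record at least the following fields. The field names and example values below are assumptions for the sketch, loosely modeled on public algorithm registers, and not the Judiciary's actual registry schema.

```python
from dataclasses import dataclass, asdict, field

# Hypothetical registry entry: field names and values are illustrative
# assumptions, not the Judiciary's actual schema.
@dataclass
class AIRegistryEntry:
    name: str
    purpose: str
    risk_level: str          # e.g. "low", following the AI Regulation's risk categories
    human_oversight: str     # how human control is safeguarded
    data_sources: list = field(default_factory=list)

entry = AIRegistryEntry(
    name="Judgment summarizer",
    purpose="Summarize published judgments at B1 language level",
    risk_level="low",
    human_oversight="An editor reviews every summary before publication",
    data_sources=["Published judgments on rechtspraak.nl"],
)
print(asdict(entry)["risk_level"])  # → low
```

Recording at minimum the purpose, risk level and human-oversight arrangement per deployment is what makes the transparency commitment in point 10 verifiable by outsiders.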


KNOWLEDGE PARTNER

Martin Hemmer