Digitization and AI as building blocks for a future-proof judiciary

The rule of law is a fundamental part of our society. Access to justice is crucial for citizens, entrepreneurs, and organizations. In 2022, PwC conducted research into solutions for future-proof justice and law enforcement.

PwC January 14, 2026

Confidence in the rule of law is a topic of discussion, just as it was in 2022. The role of digitization in the work of the government is increasingly becoming part of these discussions. Alongside the advantages of this era of digital transformation, the resilience of the rule of law and the extent to which digitization undermines important values such as privacy and national security are also being debated.

This is the second blog in a series in which we discuss our proposed solutions from 2022 and share current examples from practice. We conclude with a number of opportunities that we see for the future. In this blog, we delve into digitization and Artificial Intelligence (AI) as building blocks for a future-proof administration of justice, focusing on the judiciary. To this end, we spoke with Jos Smits, Program Manager AI at the Council for the Judiciary. The Council is the umbrella organization for the courts and tribunals, among other bodies, and ensures that judges can do their work properly.

The balance between innovation and constitutional safeguards

The use of innovative technology, such as AI, has increased significantly in recent years within the judicial chain. Digitization can offer opportunities to shorten processing times, improve the exchange of information between chain partners, and increase access to justice. The digitization of case files and the use of video links for preliminary hearings and court sessions are concrete examples of this. In addition, AI is increasingly being used to intelligently search case law, detect and combat suspicious activities by analyzing large amounts of data, and take over administrative tasks.   

But these innovations also entail risks. Fundamental principles of the rule of law, such as the right to a fair trial and an independent judiciary, must not be compromised. Algorithms can (unconsciously) adopt human biases, or information from AI chatbots can be presented as fact without being verified. When summarizing case files, language models may miss important nuances or, in the worst case, even invent evidence or case law. These risks undermine confidence in the rule of law and must be prevented.

Responsible innovation in the judiciary with low risk and high impact

The judiciary is actively working on digitization and AI, while keeping a close eye on the risks involved. In June 2025, the judiciary launched the national Artificial Intelligence program, with the aim of developing AI applications that support employees and improve the quality of their work. The focus is on developing AI applications in low-risk processes, such as summarizing and pseudonymizing judgments. The latter means that AI recognizes personal data and other sensitive information and makes suggestions for anonymizing it. The suggestions are assessed by an employee and then published. In addition, the judiciary is working on developing its own generative AI application called RechtspraakGPT. This application provides support in drafting texts such as news reports, letters, and presentations.
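The pseudonymization workflow described above has a recognizable shape: a model proposes redactions, a human reviewer approves them, and only then are replacements applied. The sketch below illustrates that suggest-then-review loop in Python. It is purely illustrative, not the judiciary's actual system: the pattern names, placeholder format, and regex-based detection are our own assumptions (a production pipeline would rely on trained named-entity recognition rather than regexes alone).

```python
import re

# Illustrative patterns for personal identifiers in a Dutch judgment.
# These are assumptions for the sketch; real systems use NER models.
PATTERNS = {
    "BSN": re.compile(r"\b\d{9}\b"),                  # citizen service number
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "NAME": re.compile(r"\b(?:de heer|mevrouw)\s+[A-Z][a-z]+\b"),  # titled names
}

def suggest_pseudonyms(text: str) -> list[tuple[str, str]]:
    """Return (matched text, placeholder) suggestions for a human reviewer."""
    suggestions = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            suggestions.append((match.group(), f"[{label}]"))
    return suggestions

def apply_pseudonyms(text: str, approved: list[tuple[str, str]]) -> str:
    """Replace only the suggestions the reviewer approved."""
    for original, placeholder in approved:
        text = text.replace(original, placeholder)
    return text

vonnis = "De verdachte, de heer Jansen, BSN 123456789, is gehoord."
suggestions = suggest_pseudonyms(vonnis)   # reviewer would now assess these
print(apply_pseudonyms(vonnis, suggestions))
```

The key design point mirrors the article: the model only *suggests*; nothing is redacted until the `approved` list has passed a human check, which is what keeps the process in the low-risk category.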

For Jos Smits, AI Program Manager at the Dutch judiciary, experimenting with AI is only valuable if it actually contributes to practice. "Experimentation should never be an end in itself," he emphasizes. "We now know that AI works. The essence is that an experiment must lead to concrete value for the judiciary, and that it takes place in an environment where the results can actually be applied operationally. Otherwise, you mainly create disappointment and the impact will be negligible."

AI in the justice system is inevitable: the entire chain must prepare itself 

"It's not just our own use of AI in daily practice that requires attention," says Smits. "Equally important, and currently not given enough attention, is learning how to deal with the AI that the judiciary is confronted with every day." Think of deepfake videos or created photos that are submitted as evidence. In addition, the police and the Public Prosecution Service are increasingly using AI tools in investigations and prosecutions. This directly affects the transparency and traceability of the compilation of a case file and thus the administration of justice. This could fundamentally change the position of the judge and the rule of law.

How do you prove that a photo is real and not generated by AI? What does it mean for equality before the law if one party can afford a lawyer with advanced AI tools and the other cannot? Such challenges can only be solved at the chain level, says Jos Smits. This calls for an open discussion in the short term in which all parties are aware of the long-term consequences of AI. Consequences that may be more fundamental to the position of the judiciary than we can currently foresee.

Knowledge as the key to change

Learning to use AI effectively starts with increasing AI literacy. Smits sees both enthusiasts within the judiciary ("why can't I use an AI chatbot for my daily work?") and employees who are cautious or outright critical. Knowledge is essential for both groups: "For those who want to move quickly, it is important to realize that data management and security must be in order before sensitive and personal data can be used. For the group that is cautious and concerned, it is important to show how AI can create space for human contact."

The judiciary is therefore investing in awareness-raising through lectures, workshops, and practical examples. Employees are learning not only how to use AI themselves, but also how to recognize AI use by others. The aim is to make employees feel comfortable using AI responsibly and able to assess both the opportunities and risks.

Three recommendations for future-proof digitization in the public domain

Based on our experiences at the judiciary, we have formulated three recommendations for organizations that want to digitize and apply AI:

  1. Increase AI literacy within an organization
    AI and digitization can only truly add value if employees understand what the technology entails, what its possibilities are, and where its limitations lie. This means not only offering training courses and workshops, but also creating space for discussing concerns, sharing practical experiences, and redesigning processes together. Successful implementation requires a strategy that takes the human factor and the change process into account.
  2. Only experiment if it actually contributes to practice
    Experimenting with AI only makes sense if it leads to concrete value in daily practice. Avoid pilots or experiments becoming an end in themselves. Start with applications that tie in with existing processes and whose results can actually be implemented. This will prevent disappointment and ensure that the energy and enthusiasm of employees are converted into sustainable change.
  3. Keep an eye on the long-term impact
    Don't be seduced by quick wins or the hype surrounding AI. Experience shows that we often overestimate the short-term opportunities, while underestimating the long-term consequences. Therefore, remain critical of the structural effects of digitization and AI on the organization, the rule of law, and society. Think ahead, consider different perspectives, and ensure that ethical and legal safeguards are included from the outset. Involve the entire chain in order to arrive at sustainable solutions.

This article was written by Nina van der Voort (Senior Manager at PwC) and Liesbeth van der Maat (Partner People & Organization at PwC).
