PONT Data&Privacy
Is AI taking over in supervision?

At DNB, we have always loved intelligence. But now that artificial intelligence is increasingly thinking, reading and even writing alongside us, the question arises: how do you use AI responsibly as a supervisor, without losing your expertise or compromising confidentiality? In this DNB blog, Divisional Director of Supervision Maarten Gelderman shares his perspective on the role of AI within his organization.

De Nederlandsche Bank August 1, 2025

At DNB, we have a thing about intelligence. Not so long ago, we asked applicants for their high school marks. If you didn't have at least an eight for Mathematics B, your chances of getting an interview were considerably reduced. And we preferred new employees with a PhD in econometrics. I am exaggerating, of course, and we no longer ask for grades. But for certain positions we still do assessments and have applicants tested for intelligence. And we prefer candidates who score above the academic average.

New hobby

Right now we have a new hobby. We are a little crazy about artificial intelligence (AI). We don't seem to be the only ones, by the way. But as a supervisor, this is not yet straightforward. We can't simply use an AI tool like ChatGPT: after all, it learns from our questions, and we would unintentionally be sharing confidential information about financial institutions, or about what we think of them.

That problem has now been solved by working with a shielded AI environment, or actually more than one. We use it, for example, to search complex and extensive applications from institutions in a smart way. Think of a pension fund's entry decision or a bank's model documentation. But there are plenty of plans on the table to do more. Hopefully, we will soon be able to tell an applicant, before an application has even been submitted, that our AI tool considers it incomplete or finds errors in it. We will probably also be able to have AI generate drafts of formal decisions that are already 80% complete. Or, conversely, partly automate quality control of the final product with AI.

Asking is allowed

That, too, still has its challenges. Searching or querying a set of documents sounds easy, but in a first test on our own Open Book Supervision (the website with supervisory information), the AI did not distinguish between our own policy and consultation responses. That is annoying when you then ask the tool what DNB's policy on X is and get, as an answer, a proposed but never adopted relaxation of that policy.
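The mix-up described above is a common pitfall in document search: without metadata, a search tool treats an adopted policy and an unadopted proposal as equally authoritative. A minimal sketch of one standard remedy, tagging each document with a status and filtering on it, is shown below. All names and documents here are illustrative, not DNB's actual systems or texts.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    status: str  # e.g. "adopted_policy" or "consultation_response"
    text: str

# A toy corpus with one adopted policy and one unadopted proposal on topic X.
CORPUS = [
    Document("Policy on X", "adopted_policy",
             "Institutions must report X quarterly."),
    Document("Consultation response on X", "consultation_response",
             "We propose relaxing the quarterly reporting of X."),
]

def search(query, corpus, status=None):
    """Naive keyword search; optionally restrict to one document status."""
    hits = [d for d in corpus if query.lower() in d.text.lower()]
    if status is not None:
        hits = [d for d in hits if d.status == status]
    return hits

# Without the status filter, both documents match a query on "x":
mixed = search("x", CORPUS)
# With the filter, only the adopted policy is returned:
policy_only = search("x", CORPUS, status="adopted_policy")
```

The point is not the search algorithm (a real system would use embeddings or a proper index) but the metadata filter: the tool can only distinguish policy from proposal if that distinction is recorded on the documents in the first place.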

Hallucinating

Things get even worse when the AI tool runs out of ideas and starts hallucinating. A notorious example is the lawyer in the US who had generated his pleading with ChatGPT, complete with references to, unfortunately, non-existent case law. There are tricks to reduce the chances of this happening, but completely ruling it out is impossible in a non-deterministic system. And in the polished prose such a tool generates, you easily read over the errors.

Natural intelligence

That is why we make sure there is always a human between what the model generates and what we ultimately do, someone who can intervene. And that human component helps prevent errors. To use AI well, you need to know something about AI, but much more about your own profession. That does not happen by itself. The people who now use the output of an AI tool once did the work themselves: they plowed through entire files and wrote out decisions by hand (and the model may have been trained in part on their work).

As we use more AI, that knowledge is in danger of disappearing. One of our guiding principles, shared with the European Central Bank, is therefore that AI must not lead to the loss of expert knowledge. Easier said than done. I foresee that, in our recruitment, we will continue to focus on intelligence of the natural kind for some time to come.
