At the recent Risk & Compliance Annual Conference, the role of artificial intelligence (AI) in the compliance sector was a recurring topic of conversation. Jolanda ter Maten, expert in digital innovation and author of the book "From Buzz to Bizz: Your strategic guide in a complex world of emerging technologies," spoke to us about how the discussion around AI is deepening. It is no longer just about tools, but increasingly about fundamental questions of ethics, organizational culture and strategy.
Whereas organizations confronted with new technologies often reach first for the latest AI tools, Ter Maten sees a shift. The focus is no longer exclusively on the "what," but on the "why" and the "how." AI is increasingly seen as a compass: it can provide direction, but humans remain responsible for the course and the final destination.
According to Ter Maten, it is essential that organizations start not with the question of which tool to use, but with the question of why they want to use AI in the first place. What problem are they trying to solve? This reversal in thinking, she says, is crucial for the successful and ethical application of AI.
Many organizations try to apply AI to existing processes in the hope of making them more efficient. But that, according to Ter Maten, is a missed opportunity. Technologies such as AI, cloud and blockchain offer the chance to fundamentally redesign processes. That requires breaking free of old assumptions and structures, and reflecting on the underlying purpose of those processes. "You cannot simply sprinkle existing processes with AI; it requires fundamentally different choices and letting go of old frames of mind."
The urgency of AI therefore lies not in the benefits it brings immediately, but in the risk of being left behind. Those who move too late risk becoming irrelevant, as happened earlier to companies that did not make the digital transition in time.
Successful deployment of AI requires more than technological know-how. Governance is key: the framework of agreements on data use, responsibilities and collaboration. Ter Maten emphasizes that AI changes not only processes, but also the underlying roles and structures within organizations. This requires rethinking who has access to which data, which parties are involved and how accountability is established.
In addition, AI can generate new types of data: data that were not on the radar beforehand but that do have policy implications. Without clear governance, risks to compliance, privacy and ethics arise. AI is therefore not an ICT project, but a strategic organizational issue.
One of the biggest risks in applying AI in compliance is bias in the underlying data. If an AI system is trained on existing data that contains biases, conscious or unconscious, those biases can be reproduced and even amplified. This can lead to outcomes that are discriminatory or not in line with laws and regulations.
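To make that mechanism concrete, the sketch below (entirely hypothetical data, not an example from Ter Maten's talk) shows how a naive model "trained" on skewed historical decisions simply turns the historical skew into its decision rule:

```python
# Minimal sketch with hypothetical data: a naive "model" trained on
# skewed historical decisions reproduces the skew as policy.

# Hypothetical historical decisions: group "A" was approved far more
# often than group "B" for otherwise comparable profiles.
training_data = (
    [("A", 1)] * 90 + [("A", 0)] * 10 +   # group A: 90% approved
    [("B", 1)] * 40 + [("B", 0)] * 60     # group B: 40% approved
)

def approval_rate(group: str) -> float:
    """'Train' on the historical outcomes for one group."""
    outcomes = [y for g, y in training_data if g == group]
    return sum(outcomes) / len(outcomes)

# The learned decision rule mirrors the historical bias one-to-one.
model = {g: approval_rate(g) for g in ("A", "B")}
print(model)  # {'A': 0.9, 'B': 0.4}
```

Nothing in the data says group "B" deserves fewer approvals; the model simply inherits the pattern, which is exactly why biased training data yields biased outcomes.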
Ter Maten cites a well-known example from the medical field: an AI model was able to use retinal images not only to predict diabetes (something doctors could already do) but also to infer a patient's gender, something human researchers could not extract from those images. This illustrates how AI can recognize patterns and correlations that are invisible to humans. Sometimes this leads to undesirable outcomes or false positives, but it can also yield new insights and scientific breakthroughs.
Precisely because AI "sees" differently than humans do, it is important to remain critical of outcomes: are they relevant, desirable and legally permissible? AI trained on existing, potentially biased data may inadvertently perpetuate discrimination. Moreover, AI systems can still trace individuals from innocuous or seemingly non-identifying data by making connections between data sets. This also puts compliance with the GDPR (AVG) at risk.
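The linkage risk is easy to illustrate. In the sketch below (again hypothetical data), two data sets that are each "anonymous" on their own re-identify a person once they are joined on quasi-identifiers such as postcode and date of birth:

```python
# Minimal sketch (hypothetical data) of a linkage attack: neither data
# set contains a direct identifier, but joining them on quasi-identifiers
# ties a named person to a sensitive record.

health_records = [  # "anonymized": no names
    {"postcode": "1234AB", "birth": "1980-05-01", "diagnosis": "diabetes"},
    {"postcode": "5678CD", "birth": "1975-11-23", "diagnosis": "asthma"},
]
public_register = [  # publicly available: names included
    {"name": "J. Jansen", "postcode": "1234AB", "birth": "1980-05-01"},
]

for person in public_register:
    for record in health_records:
        if (person["postcode"], person["birth"]) == (record["postcode"], record["birth"]):
            # The "anonymous" diagnosis is now linked to a named individual.
            print(person["name"], "->", record["diagnosis"])
```

An AI system that correlates data sets at scale performs this kind of join automatically, which is why seemingly harmless data can still create a GDPR exposure.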
She states, "Technology has come a long way, but we as humans are not there yet. We remain the weakest link." Collaboration between compliance officers and AI officers thus becomes crucial. Legal knowledge and technological knowledge must go hand in hand.
AI literacy, the ability to understand AI without being highly technically savvy yourself, is essential for organizations that want to work with AI. That means understanding how AI handles data, where bias can arise, and what the implications are for decision-making.
According to Ter Maten, organizations are no longer just looking for quick fixes or basic courses in prompt engineering. They want to know how AI really works, what the risks are and how they can remain agile as an organization. This also requires attention to diversity in perspectives, to avoid blind spots in data and models.
One particular concern Ter Maten mentions is the so-called "Chinese whispers" effect: AI systems that communicate with each other without human intervention can inadvertently distort information, with small errors compounding along the chain. Human control and monitoring therefore remain necessary, even as AI becomes increasingly autonomous.
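A toy illustration of that effect (a sketch of the general mechanism, not any specific system's behavior): if each agent in a chain compresses what it receives even slightly, the original message degrades quickly when no human checks the intermediate steps.

```python
# Minimal sketch of the "Chinese whispers" effect: each hypothetical
# agent relays a lossy compression of what it received, so distortions
# compound across the chain.

def lossy_relay(message: str) -> str:
    # Stand-in for an AI agent that summarizes its input; here it crudely
    # drops every third word to mimic unintended information loss.
    words = message.split()
    return " ".join(w for i, w in enumerate(words) if i % 3 != 2)

report = ("The supplier breached clause 4 twice in March and once in April, "
          "so escalation to the regulator is required before 1 June.")
for step in range(4):  # four agents pass the report along
    report = lossy_relay(report)
    print(f"after agent {step + 1}: {report}")
```

After a few hops the report no longer says who breached what or when escalation is due, which is the kind of silent drift human monitoring is meant to catch.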
Although legislation such as the NIS2 Directive and the EU AI Act provides guidance, policy often lags behind the rapid developments in AI. Still, Ter Maten sees starting points in the AI Act for organizations to take practical action. A first step is increasing AI literacy, something the EU now requires.
Within governments and public institutions there is a growing need for concrete tools to shape AI policy. Ter Maten notices this daily in her work with diverse organizations, from municipalities and educational institutions to private parties such as KPN, BDO and insurers. She leads sessions for, among others, the municipal council of Blaricum, secondment agencies such as Flexwise, compliance officers, lawyers and government bodies such as the Ministry of Defense and the Tax Authority.
The question participants arrive with is often practical: "How do I write a good prompt in ChatGPT?" But gradually the conversation shifts to more fundamental topics. How do we handle data as a municipality or institution? What will it mean if we let AI analyze policy, and citizens soon do the same?
Parties such as the Council of State and the VNG, the Association of Netherlands Municipalities, are also interested in exploring these questions further. Ter Maten talks with them about how AI policy should not remain abstract or purely legal, but should be practical and understandable. "No thick reports full of vague governance language, but practical insights that are directly applicable in daily policy practice."