We spoke to Linda Terlouw, instructor of Day 3 of the AI Compliance Officer in Business course. Terlouw combines a background in both computer science and business information technology and has extensive experience as a consultant, IT architect and specialist in AI and data science. We asked her about key themes in the course, common misconceptions and how organizations can already prepare for the AI Act. She also looks ahead: what will the field of AI and compliance look like in 2030?
I cover the AI Act and the technical background of AI. Obviously it is important that AI Compliance Officers know the legal framework, but that alone is not enough to determine whether an organization's AI system is actually compliant. You also need to speak the language of the techies well enough to really understand what is happening and whether the "paper reality" matches practice. On Day 3 I will give an overview of the different types of AI systems, with many examples. We want to train the AI Compliance Officer to be someone who speaks the language of both the lawyer and the techie.
Another topic I will address is the security of AI systems. Proper security is a requirement under the AI Act for high-risk systems, but it is obviously also important for AI systems outside the scope of the AI Act. AI systems are vulnerable not only to traditional IT attacks, but also to specific, new forms of attack. While this course does not go into technical depth on cybersecurity, we do give an overview of the new types of attacks that are possible and roughly how they work.
The biggest misconception I see in practice is that because of the AI Act, "nothing is allowed anymore" or that you have to "do everything" for all AI systems. Fortunately, this is not the case. The AI Act takes a risk-based approach: some types of AI systems are prohibited because they pose an unacceptable risk, and other types are subject to additional requirements because they are high-risk. In addition, the AI Act imposes some transparency obligations, under which it must be made clear that you are interacting with an AI system or that certain digital material, such as videos, was created by AI. However, the majority of AI systems do not fall into these categories, so you still have a lot of freedom in the EU to develop and use AI systems.
At this time, not all of the AI Act applies yet. The prohibitions do already apply, however, so if this has not been done yet, it is wise to check as soon as possible whether any AI systems in the organization fall under them. The obligations for high-risk systems take effect in 2026, but if you have such systems in your organization, it is good to start working on those obligations now: set up human oversight, logging, a risk management system, and so on. In doing so, I recommend selecting one concrete use case and working through all the requirements with a multidisciplinary team.
I find this very difficult to predict. The world of AI is moving so fast right now that it is hard to keep up. Until a few years ago, AI was a technology that only experts were concerned with, but with the rise of ChatGPT and similar models, just about everyone works with AI these days, from policy officers to elementary school students. This has created huge social issues: how do we deal with students using AI for coursework, how do we stop the spread of AI-generated disinformation, and how do we prevent vulnerable people from relying too heavily on AI as a therapist or doctor? The challenge is, on the one hand, to encourage the development of AI within Europe so that we do not fall (even) further behind the U.S. and China and, on the other, to properly mitigate the risks. We need good Compliance Officers who are able not only to say 'no', but also to think along about how to steer innovation in the right direction. Hopefully this training will contribute to that!