The AI Act is now in effect: does your organization already comply?

Since February 2025, AI literacy has been mandatory under Article 4 of the European AI Regulation. In other words, your employees must be able to understand, responsibly apply and control artificial intelligence (AI). Do you have the basics in place?

July 31, 2025

Blog

The Berenschot AI Trend Survey 2025 shows that 90% of the 500 respondents expect the use of AI in work processes to only increase in the coming years. The need for organizations to invest in digital security and to minimize the risks of AI is therefore growing.

It is notable, however, that most organizations have not yet developed policies to make employees sufficiently aware of, for example, the operation, ethical implications and high-risk aspects of the AI systems they use and/or provide. This is despite the fact that building and maintaining this awareness and level of knowledge, known as AI literacy, has been mandatory under Article 4 of the AI Regulation since February 2025.

Different for each organization

AI literacy refers to the awareness, knowledge and skills necessary to understand, responsibly apply and control artificial intelligence (AI). Think of basic knowledge of concepts such as algorithms, neural networks, language models and data collection. Or of practical skills such as being able to critically reflect on AI output, recognize risks such as bias, and know which human interventions an AI system requires.

The point is that AI literacy can mean something different for each organization. In fact, AI literacy depends largely on context. For example, the AI Act affects users, developers, those ultimately responsible within the organization, as well as stakeholders of AI systems. Nevertheless, every European (and Dutch) organization is obliged to take responsibility for shaping appropriate policies, training and education to bring the AI literacy of its employees up to standard.

Concrete example

Where and how your organization should work on AI literacy depends, for example, on the context in which employees use AI systems, how often they do so, and which employees use them. Their prior knowledge and skills, and their role within the organization, also play a part.

For language models such as ChatGPT, for example, this means that users understand how such systems generate language, the role training data plays in this, and why transparency and traceability of the output are often limited. Understanding the limitations of AI is also part of AI literacy, such as being able to distinguish fact from fiction. When employees work with more sophisticated and complex AI systems, such as those used for fraud detection, a correspondingly higher level of AI literacy is required.

What steps should you take now?

The purpose of Article 4 of the AI Regulation is that each organization determines what an appropriate level of knowledge is and ensures that all employees meet that level. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens), the coordinating algorithm supervisor in the Netherlands, therefore advises organizations to start working on AI literacy immediately to reduce risks. Measures around AI literacy should be structural in nature: one-off training or learning interventions are a good starting point, but insufficient in the long run.

When will you have the basics in place? As soon as it is clear:

  • which AI systems your organization uses,

  • what knowledge and skills are needed to use them, and

  • how to train your employees to the desired level of AI literacy.

After that, it is important to continuously monitor, evaluate and improve this process. So: are your employees sufficiently up to date with the latest developments around AI, and does the training program still fit the goals of your organization?

Read the original version of this article on the Berenschot website.


KNOWLEDGE PARTNER

Martin Hemmer