AI literacy in the organization: from strategy to practice
Since 2 February 2025, organizations have been required to ensure that their employees are AI literate. This obligation stems from the AI Act. The law does not prescribe the exact measures to be taken; the required level of knowledge depends on how AI is applied within the organization. In this article, DMCC, a compliance specialist in privacy and customer contact, provides guidance on how to get started.
8 May 2025
AI Act
The AI Act (AI Regulation) is a European law that aims to promote innovation while managing the risks posed by AI. In doing so, special attention is given to the fundamental rights and values of citizens within the European Union, including privacy. Under Article 4 of the AI Act, organizations must ensure a sufficient level of AI knowledge among their employees. That level of knowledge must be appropriate to the employee's work: the greater the risks and impact of the AI application, the higher the level of skills, knowledge and understanding employees should have.
AI applications mapping
AI literacy is clearly not a one-size-fits-all model. Each organization must assess what level of AI literacy is appropriate. In practice, this starts with mapping the AI applications within the organization. In what ways is AI currently deployed, and how does the organization want to deploy AI in the near future? Then, for each application, the level of risk and the effect of the application on the people involved should be determined.
4 levels of risk
The AI Act distinguishes four levels of risk. Many organizations use applications that pose no or minimal risk to the security and fundamental rights of the individuals involved; examples include spam filters or games. Applications with limited risk are also common, such as a chatbot. The third category is high-risk systems: applications that have a greater impact on the individuals involved. Examples are an AI system that automatically selects resumes for the next round in a hiring process, or software that determines whether someone will receive a loan. In such cases, bias, exclusion or violation of privacy rights can pose real risks. Finally, there are systems with unacceptable risk, such as "social scoring" based on social behavior or personal characteristics. This type of AI is prohibited.
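The mapping exercise above can be sketched as a simple internal register that records each AI application together with its assessed risk tier. The following Python snippet is a minimal illustration only; the application names and the `RiskLevel`/`ai_register` structures are hypothetical examples, not anything prescribed by the AI Act:

```python
# Minimal sketch of an internal AI-application register, assuming the
# four AI Act risk tiers. All application names are hypothetical.
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal or no risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk (prohibited)"

# Hypothetical inventory: each entry maps an application to its assessed tier.
ai_register = {
    "spam filter": RiskLevel.MINIMAL,
    "customer service chatbot": RiskLevel.LIMITED,
    "resume screening tool": RiskLevel.HIGH,
    "loan approval scoring": RiskLevel.HIGH,
    "social scoring system": RiskLevel.UNACCEPTABLE,
}

def applications_at_level(level: RiskLevel) -> list[str]:
    """List registered applications assessed at a given risk level."""
    return [name for name, risk in ai_register.items() if risk == level]

print(applications_at_level(RiskLevel.HIGH))
# ['resume screening tool', 'loan approval scoring']
```

A register like this makes it easy to see at a glance which applications require the most attention, and which employees therefore need the highest level of AI literacy.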
AI literacy
The category an AI system falls into determines which obligations under the AI Act apply. When it comes to AI literacy, high-risk applications require more skills, knowledge and understanding than lower-risk applications. In addition to the risk level, the context of the AI application, the role of the specific employee and the resources available to the organization also matter. Larger organizations are likely to have more resources available than smaller ones.
AI literacy is not only about the technical operation, but also about the social, ethical and practical aspects. The employee working with the system must understand how to interpret the output of the AI application and how it affects the individuals involved. Finally, periodic evaluation is essential. Developments are rapid, especially in the field of AI. This makes AI literacy not a one-time exercise but an ongoing process.
In practice
Start by making the topic of AI discussable within the organization. Organize training or e-learning sessions, or discuss the topic during an employee meeting. Create awareness by posting messages on the internal news channel and putting the topic on the agenda of team meetings. Involve employees from different teams and start a conversation about how AI is used and its potential impact on those involved. To ensure that the risks of future applications are also considered in a timely manner, a working group of AI ambassadors can be set up. They can identify bottlenecks, collect questions and promote AI literacy within their own teams.