The negotiation process surrounding the European Union's AI Act is nearing its end. The new regulation aims to strengthen the regulation of AI systems as a whole on the basis of a risk-based approach. Yet certain cybersecurity aspects of AI systems do not seem to receive much attention, either in the text of the bill or in institutional discussions. This is particularly true of the security of the components on which AI systems are built, such as AI models. Giacomo Delinavelli explores this issue in this blog.

As the EU AI Act slowly but surely moves through the trilogue process, one aspect of AI systems remains underexposed: their impact on cybersecurity. In its most recent Threat Landscape 2023 (1), published in October, the European Union Agency for Cybersecurity (ENISA) concluded that AI systems may pose certain risks related to user authentication or to the authenticity of content and information. Moreover, AI systems that directly affect people, such as a system that manages the supply of electrical power or a company's work schedules, can be manipulated, poisoned (2) or unfairly skewed by deliberately induced bias.
According to the agency, "all of these threats can be linked to multiple vulnerabilities, such as lack of training due to targeted attacks, poor control over what information is retrieved by the model, lack of sufficient data to resist poisoning, poor management of access rights, use of vulnerable components and lack of integration with a broader cyber resilience strategy."
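To make the notion of poisoning concrete, here is a minimal, self-contained Python sketch of a label-flipping attack against a toy classifier. The synthetic data, the simple logistic regression model and the attack strategy are all illustrative assumptions for this blog, not material from the ENISA report.

```python
# Illustrative sketch: an attacker who controls part of the training
# pipeline flips labels, and the resulting model degrades noticeably.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=1000):
    """Two Gaussian clusters: class 0 around -1, class 1 around +1."""
    X = np.concatenate([rng.normal(-1, 1, (n // 2, 2)),
                        rng.normal(+1, 1, (n // 2, 2))])
    y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain logistic regression via full-batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def accuracy(w, b, X, y):
    return (((X @ w + b) > 0).astype(float) == y).mean()

X_train, y_train = make_data()
X_test, y_test = make_data()

# Clean baseline.
w, b = train_logreg(X_train, y_train)
print(f"clean model accuracy:    {accuracy(w, b, X_test, y_test):.2f}")

# Targeted poisoning: flip 60% of class-0 labels to class 1, which
# pushes the learned decision boundary and wrecks class-0 predictions.
y_poisoned = y_train.copy()
idx0 = np.where(y_train == 0)[0]
flip = rng.choice(idx0, size=int(0.6 * len(idx0)), replace=False)
y_poisoned[flip] = 1.0

w_p, b_p = train_logreg(X_train, y_poisoned)
print(f"poisoned model accuracy: {accuracy(w_p, b_p, X_test, y_test):.2f}")
```

The attack needs no access to the model itself, only to the training data, which is why the ENISA quote above stresses control over what information a model learns from.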
Assessing the cybersecurity risks associated with an AI system is not only a best practice but also a legal obligation provided for in existing and upcoming legislation, such as the Machinery Regulation (3), the AI Act proposal, the Cyber Resilience Act proposal and, for certain sectors, the NIS2 Directive.
According to a recent JRC study (4), four aspects in particular should be kept in mind:
1. The focus of the AI Act is on AI systems.
AI systems are built from several internal components, some of which are AI-specific while others are not. Although AI models are essential components of AI systems, they do not constitute AI systems by themselves. The cybersecurity requirement established in the AI Act therefore applies to the AI system as a whole, not directly to its internal components.
2. Compliance with the AI Act necessarily requires a security risk assessment.
To ensure that a high-risk AI system complies with the AI Act, Article 9 provides for the implementation of a risk management system. According to Article 15, this requirement includes a cybersecurity risk assessment of the system and its components. Risk assessments should especially consider limitations and vulnerabilities of AI models in the context of their interaction with other non-AI components of the system. This risk-based approach is crucial to determining the details of a cybersecurity solution for individual AI systems. This is consistent with established cybersecurity practice, where risk assessments already play an important role, particularly in the most widely used information security standards of the ISO 27000 series (ISO/IEC JTC 1/SC 27 2022).
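As a rough illustration of what such a risk assessment might record, here is a hedged Python sketch of a likelihood-times-impact risk register that mixes AI-specific and conventional threats. The threat list, scoring scales and acceptance threshold are illustrative assumptions, not values prescribed by the AI Act or the ISO 27000 series.

```python
# Sketch of a simple risk register in the spirit of ISO 27005-style
# risk assessment; all entries and scales are invented for this example.
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str          # what could go wrong
    component: str       # which part of the AI system it targets
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# AI-specific and conventional threats side by side, reflecting the
# interaction between AI models and non-AI components.
register = [
    Risk("training-data poisoning",           "AI model",        3, 5),
    Risk("adversarial evasion at inference",  "AI model",        3, 4),
    Risk("model theft via API scraping",      "inference API",   2, 3),
    Risk("credential compromise",             "admin interface", 3, 5),
    Risk("vulnerable third-party library",    "preprocessing",   4, 4),
]

THRESHOLD = 12  # illustrative risk-acceptance threshold
for r in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "MITIGATE" if r.score >= THRESHOLD else "accept/monitor"
    print(f"{r.score:>2}  {flag:<14} {r.threat} ({r.component})")
```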
3. Securing AI systems requires an integrated and consistent approach with established methods and AI-specific controls.
This process should leverage current cybersecurity practices and procedures, combining existing controls for software systems with measures specific to AI models. An AI system is the sum of all its components and their interactions. A holistic approach that follows the principles of security-in-depth and security-by-default is needed to ensure that AI systems meet the cybersecurity requirements of the AI Act.
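To show what such a layered approach could look like in practice, below is a minimal Python sketch that wraps model inference in both conventional and AI-specific controls. All names and thresholds here (check_token, FEATURE_BOUNDS, the rate limit) are invented for the example, not taken from any product or standard.

```python
# Sketch of security-in-depth around model inference: if one layer
# fails, the next one still stands between the attacker and the model.
import numpy as np

FEATURE_BOUNDS = (-10.0, 10.0)   # assumed valid input range for this system
MAX_CALLS_PER_MIN = 60           # conventional control: rate limiting

def check_token(token: str) -> bool:
    """Stand-in for a real authentication backend."""
    return token == "demo-token"

def predict_securely(token, features, model, calls_this_minute):
    # Layer 1: conventional access control.
    if not check_token(token):
        raise PermissionError("invalid credentials")
    # Layer 2: conventional abuse prevention (also hampers model extraction).
    if calls_this_minute >= MAX_CALLS_PER_MIN:
        raise RuntimeError("rate limit exceeded")
    # Layer 3: AI-specific input validation, rejecting out-of-range
    # (potentially adversarial) feature values before they reach the model.
    lo, hi = FEATURE_BOUNDS
    if not np.all((features >= lo) & (features <= hi)):
        raise ValueError("input outside expected range")
    # Layer 4: the model itself, plus a plausibility check on its output.
    score = float(model(features))
    if not 0.0 <= score <= 1.0:
        raise RuntimeError("model output failed plausibility check")
    return score

# Toy usage with a dummy "model" that returns a probability-like score.
model = lambda x: 1 / (1 + np.exp(-float(np.sum(x))))
print(predict_securely("demo-token", np.array([0.5, -1.2]), model,
                       calls_this_minute=3))
```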
4. There are limitations in the state of the art for securing AI models.
In today's AI market, there is a wide range of AI technologies with varying levels of maturity. Not all AI technologies are ready for use in AI systems intended for deployment in high-risk scenarios; their cybersecurity limitations must first be adequately addressed. In some cases, particularly for emerging AI technologies, there are inherent limitations that cannot be resolved at the level of the AI model alone. In such cases, compliance with the cybersecurity requirements of the AI Act can only be achieved through the holistic approach described above.
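The point about inherent, model-level limitations can be made concrete with a tiny evasion example: a one-step, FGSM-style perturbation flips the prediction of a linear classifier. The weights, input and perturbation budget below are illustrative assumptions.

```python
# Sketch of a one-step gradient ("FGSM-style") evasion attack against a
# linear classifier; for a linear model the input gradient is simply w.
import numpy as np

w = np.array([1.5, -2.0])   # assumed trained weights
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([1.0, 0.2])    # a legitimately class-1 input
print("clean prediction:      ", predict(x))   # -> 1

# Nudge each feature by epsilon in the direction that lowers the score.
eps = 0.8
x_adv = x - eps * np.sign(w)
print("perturbation:          ", x_adv - x)
print("adversarial prediction:", predict(x_adv))  # -> 0
```

Defenses such as adversarial training can raise the cost of this kind of attack, but known robustness trade-offs mean some residual risk typically has to be absorbed by system-level controls, which is exactly the holistic approach the JRC study describes.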
All in all, it is crucial to deploy AI responsibly and securely. Organizations must ensure that their AI systems are transparent, explainable and auditable. They must also train their employees sufficiently to identify and mitigate the cybersecurity risks associated with AI systems. A comprehensive security risk assessment of an AI system requires attention to the entire life cycle of system development and deployment. Focusing too narrowly on securing machine learning models through academic adversarial machine learning oversimplifies the problem in practice. To truly secure an AI system, we must consider the security of the entire supply chain and the management of AI systems.
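As one concrete example of a supply-chain control that sits outside the model itself, the following minimal sketch pins the cryptographic hash of a model artifact and refuses to load a file that does not match. The file name and digest are placeholders; real pipelines would typically pair this with signed provenance metadata.

```python
# Sketch of a supply-chain integrity check on a model artifact.
import hashlib
from pathlib import Path

# Placeholder: in practice this is the known-good digest recorded when
# the artifact was produced, distributed out of band.
PINNED_SHA256 = "replace-with-known-good-sha256-digest"

def verify_model_artifact(path: str) -> bool:
    """Compare the artifact's SHA-256 digest against the pinned value."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == PINNED_SHA256

if not verify_model_artifact("model.bin"):   # placeholder file name
    raise RuntimeError("model artifact failed integrity check; refusing to load")
```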
(1) https://www.enisa.europa.eu/publications/enisa-threat-landscape-2023
(2) https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/amp/
(3) The cybersecurity risk is recognized in Recital 25 of the Machinery Regulation, which requires "manufacturers to adopt proportionate measures for the protection of the safety of the product within the scope of this Regulation, notwithstanding the application of other Union legal acts specifically addressing cybersecurity aspects."
(4) https://publications.jrc.ec.europa.eu/repository/handle/JRC134461
