Zachary Amos, Editor of ReHack Magazine, focuses on the EU AI Act, its requirements and its security implications
As artificial intelligence (AI) has grown, so too have government regulations addressing it. One of the latest and most impactful of these is the European Union’s AI Act.
The EU AI Act passed in March 2024, making it the first comprehensive law governing AI beyond smaller, local codes. Like the General Data Protection Regulation (GDPR) before it, the AI Act applies to organisations outside the EU if their systems operate within Europe or process information on European citizens. Consequently, tech companies in any nation should pay attention to it.
What is the EU AI Act?
The EU AI Act aims to keep the development and use of AI solutions safe and trustworthy. It approaches this goal by categorising AI into four risk levels: unacceptable, high, limited and minimal.
Most of the legislation focuses on high-risk AI, which includes anything serving as a safety component or relying on sensitive data. Biometric systems, critical infrastructure AI, recruitment tools and healthcare analytics solutions all fall under that umbrella. Any AI product meeting the “high-risk” criteria must comply with several requirements designed to prevent cybersecurity breaches or unjust outcomes arising from its regular operation.
The AI Act entered into force in August 2024. Any organisation building, running or distributing AI solutions should take note of its requirements to avoid fines.
Cybersecurity requirements of the EU AI Act
The EU AI Act is not strictly a security law, but many of its requirements address model safety. Businesses hoping to remain compliant must pay attention to several key cybersecurity areas.
1. Transparency and documentation
One of the most impactful security implications of the AI Act is its demand for transparency. All high-risk systems must provide technical documentation demonstrating their compliance. In addition, AI models need automatic incident logging, measures to prove their reliability and information detailing how to deploy them safely.
Such measures align with what many organisations already want from the technology. As many as 73% of businesses today are less likely to trust AI if they cannot understand its decision-making process. Complying with the EU's transparency standards helps ensure this interpretability.
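The Act does not prescribe a logging format or toolchain, so the following is only a rough sketch of what automatic event logging around a model's inferences might look like. The field names and the confidence threshold are illustrative assumptions rather than anything the regulation specifies.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Illustrative only: the AI Act requires automatic event logging for
# high-risk systems, but it does not mandate a schema or tooling.
logging.basicConfig(filename="ai_event_log.jsonl", level=logging.INFO,
                    format="%(message)s")

def log_inference_event(model_version: str, input_data: str,
                        output: str, confidence: float) -> None:
    """Record one inference event as a JSON line for later audit."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input so the log can be audited without storing raw personal data
        "input_sha256": hashlib.sha256(input_data.encode()).hexdigest(),
        "output": output,
        "confidence": confidence,
        # Flag low-confidence results so reviewers can spot potential incidents
        "flagged_for_review": confidence < 0.6,
    }
    logging.info(json.dumps(record))

# Example: log a single decision from a hypothetical CV-screening model
log_inference_event("cv-screener-1.4", "candidate CV text...", "shortlist", 0.54)
```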
2. Risk identification and management
The Act also requires companies developing and using AI products to implement a risk management system. That means identifying what vulnerabilities and dangers the solution may entail throughout its life cycle and taking steps to minimise and mitigate those hazards.
Developers must test their models to quantify these risks, but they cannot stop there. The regulation also mandates evaluating the possibility of other vulnerabilities arising in the future based on trends from similar applications.
3. Data privacy and governance
Training data must also meet several privacy and reliability standards. Much of the text here centres on ensuring training and validation datasets are accurate, complete and free of bias. Considering that businesses spend an estimated 10-30% of their revenue dealing with data quality issues, such measures will likely benefit companies in the long run.
Privacy plays a role in compliance here, too. AI systems must minimise the use of personal identifiers where applicable and employ measures such as pseudonymisation and access restrictions to prevent breaches of personal information.
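As a rough illustration, keyed hashing is one common way to pseudonymise a direct identifier before it enters a training set. This is only a minimal sketch of that approach, not a technique the Act itself prescribes, and the key handling shown is an assumption.

```python
import hashlib
import hmac
import os

# Illustrative only: keyed hashing is one common pseudonymisation technique.
# In practice the secret key would sit in a managed secret store and be
# access-controlled separately from the dataset.
PSEUDONYMISATION_KEY = os.environ.get("PSEUDO_KEY", "replace-with-managed-secret").encode()

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a stable pseudonym."""
    digest = hmac.new(PSEUDONYMISATION_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()

# Example: the stored training record keeps a pseudonym instead of the raw email
record = {"user": pseudonymise("jane.doe@example.com"), "outcome": "approved"}
print(record)
```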
4. Backups and fail-safes
The EU AI Act acknowledges that no solution is perfect, even if it follows all the previous security and reliability steps. Consequently, the law includes language necessitating backup measures and fail-safes to minimise the impact of errors or security incidents.
While the Act does not name specific tools for meeting this requirement, it does call for technical redundancy. What that looks like will vary between organisations, but data backups, data centre redundancy and emergency response plans are common ways to ensure resilience.
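One simple form such a fail-safe can take is graceful degradation: if the primary model cannot respond, the system falls back to a conservative, auditable default instead of failing silently. The sketch below uses hypothetical function names and shows only one way of meeting the spirit of the requirement.

```python
# Sketch of a technical fail-safe: fall back to a conservative rule-based
# decision when the primary model is unavailable. All names are hypothetical.

def primary_model_score(application: dict) -> float:
    """Placeholder for a call to the deployed model endpoint."""
    raise ConnectionError("model endpoint unreachable")  # simulate an outage

def rule_based_fallback(application: dict) -> float:
    """Conservative, auditable fallback used when the model cannot respond."""
    return 0.0 if application.get("incomplete") else 0.5

def score_with_failsafe(application: dict) -> float:
    try:
        return primary_model_score(application)
    except Exception as exc:
        # Record the failure and degrade gracefully rather than guessing or blocking
        print(f"Primary model unavailable ({exc}); using rule-based fallback")
        return rule_based_fallback(application)

print(score_with_failsafe({"incomplete": False}))
```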
5. Ongoing monitoring
Similarly, AI applications under this law require ongoing monitoring. Human experts must verify their reliability before launch and provide guidance for proper oversight during use.
These oversight measures must follow written guidelines that each company adopts under the Act. The guidelines must also dictate that at least two experts verify any AI-driven identification before users can take any action based on it.
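A toy sketch of enforcing that rule in software might look like the following, where an action stays blocked until two distinct reviewers have signed off. The reviewer names and the threshold handling are illustrative assumptions.

```python
# Toy sketch: block any action on an AI-driven identification until at least
# two distinct human experts have confirmed it. Names are hypothetical.

REQUIRED_REVIEWERS = 2

def can_act_on_identification(confirmations: set) -> bool:
    """Return True only when two or more distinct experts have verified the match."""
    return len(confirmations) >= REQUIRED_REVIEWERS

confirmations = {"analyst_a"}
print(can_act_on_identification(confirmations))   # False: only one reviewer so far
confirmations.add("analyst_b")
print(can_act_on_identification(confirmations))   # True: two distinct reviewers
```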
AI regulations have heavy security implications
AI and cybersecurity are inseparable. As a result, any laws governing the development and implementation of intelligent systems naturally impact security considerations and workflows. Professionals in this field must recognise the effect the EU AI Act has on their work and adapt accordingly to remain compliant.
Author: Zachary Amos, Editor, ReHack Magazine