Andrew Pery, AI Evangelist at ABBYY, writes about 7 considerations IoT device manufacturers need to keep in mind for the EU AI Act
The EU’s Artificial Intelligence Act (AIA) is the world’s first and most comprehensive AI regulation. The Act was published in July 2024, and businesses are currently in a grace period to prepare for compliance with the AIA before enforcement begins in 2025.
The AIA will apply to software companies and manufacturers of products that embed AI either under their trademark or through an authorised representative. Accordingly, it covers IoT devices if they include or rely on AI-driven functionalities, particularly where such functionalities are categorised as ‘high-risk’ within the scope of the AIA.
When does the AIA apply to IoT device manufacturers?
In essence, the EU’s AIA applies to IoT devices whose embedded AI features could impact the health, safety, privacy, or fundamental rights of users.
Many IoT devices embed AI capabilities, whether that’s voice-activated home assistants, industrial predictive maintenance sensors, autonomous systems like drones or robots, or healthcare monitoring devices.
Many of these may be deemed ‘high-risk’ AI systems if they fall within specific categories defined in the Act. For example, a smart thermostat that monitors home temperatures might not fall under the ‘high-risk’ category, but an AI-powered industrial sensor monitoring factory equipment or a healthcare device that tracks patient vitals would likely be considered high-risk.
IoT device providers should also be aware of another related EU regulation, the EU Data Act, which governs how data generated by IoT devices is accessed, shared and used. This regulation requires manufacturers to grant ‘data users’ rights to the data generated by their devices; data users in turn acquire the right to share that data with third parties. Furthermore, device manufacturers may not use the generated data without the express permission of data users.
The AIA has rigorous conformance, accountability and transparency obligations. IoT devices that are considered high-risk are subject to compliance requirements before they are placed on the market or offered as a service.
Some of the steps manufacturers of IoT devices will need to take to comply with the AIA include:
1. Thorough risk assessment
Manufacturers will need to assess for themselves whether a device falls within the category of a ‘high-risk’ AI system. Based on this risk assessment, manufacturers can prioritise compliance efforts on the highest-risk applications, which face the most stringent requirements. An example of this could be an AI system that monitors factory safety or quality control – if it fails, employees’ safety could be put at risk.
2. Quality control
Next, manufacturers need to consider data governance and accuracy to minimise potential adverse impacts on users. High-risk AI applications require datasets that are representative, free from errors, and statistically relevant. Health-related monitoring systems in particular must undergo rigorous data quality testing to ensure the quality and reliability of device outputs.
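Dataset checks of this kind can be partially automated. The sketch below illustrates the idea with hypothetical field names and plausibility thresholds; real acceptance criteria would depend on the device, its data, and its risk classification under the AIA.

```python
# Minimal sketch of automated dataset quality checks. The field names
# and plausible ranges below are illustrative assumptions, not AIA
# requirements.

def check_dataset(records, required_fields, plausible_ranges):
    """Flag missing fields and out-of-range values in sensor records."""
    issues = []
    for i, record in enumerate(records):
        for field in required_fields:
            if record.get(field) is None:
                issues.append((i, field, "missing value"))
                continue
            low, high = plausible_ranges[field]
            if not (low <= record[field] <= high):
                issues.append((i, field, f"out of range: {record[field]}"))
    return issues

# Example: heart-rate readings from a hypothetical patient monitor.
records = [
    {"heart_rate": 72},
    {"heart_rate": None},
    {"heart_rate": 400},
]
issues = check_dataset(records, ["heart_rate"], {"heart_rate": (30, 220)})
```

In practice, such checks would run as part of a documented data-governance process, with the thresholds themselves justified and recorded.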
3. Ensure transparency
Transparency is a crucial consideration, and the AIA mandates that high-risk AI systems explain their functions and limitations. This means providing clear documentation on how systems make decisions. Traceability and logging requirements must be met, including maintaining a record of data processing activities and decision-making processes within AI systems. Manufacturing companies should develop clear user documentation and protocols to ensure operators understand the AI system’s capabilities and limitations.
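One practical building block for the traceability and logging obligations described above is an append-only decision log. The sketch below uses an illustrative record schema; the actual fields, formats, and retention periods would be driven by the AIA's logging requirements and the system's documentation.

```python
# Minimal sketch of a decision log for traceability. The schema is an
# illustrative assumption, not a prescribed AIA format.
import io
import json
import time

def log_decision(log_file, model_version, inputs, output, confidence):
    """Append one AI decision as a JSON line for later audit."""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    log_file.write(json.dumps(entry) + "\n")

# Example: log one quality-control decision to an in-memory buffer
# (a real device would write to durable, tamper-evident storage).
buf = io.StringIO()
log_decision(buf, "qc-model-1.2", {"temp_c": 81.5}, "reject_part", 0.97)
entry = json.loads(buf.getvalue())
```

Logging the model version alongside each decision is what makes the record useful for audits: it ties every output back to a specific, documented system state.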
4. Mitigate cybersecurity issues
The AIA requires manufacturers to secure AI systems against misuse or cybersecurity threats. Companies must assess and mitigate any vulnerabilities in IoT devices and AI models that could lead to unauthorised access, tampering, or data breaches. Regular security audits and real-time monitoring should be established to detect potential threats and maintain resilience against cyber risks.
5. Integrate human overrides
Manufacturers should integrate procedures that allow humans to monitor and override AI-powered decisions, especially those involving worker safety, machinery control, or quality assurance.
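A common pattern for this kind of human oversight is a gate that routes low-confidence or safety-critical decisions to an operator instead of executing them automatically. The sketch below is a simplified illustration; the 0.9 confidence threshold and the callback interface are assumptions, not prescribed values.

```python
# Minimal sketch of a human-override gate (hypothetical threshold):
# safety-critical or low-confidence decisions are deferred to an
# operator, who can approve or replace the AI's suggested action.

def decide(action, confidence, safety_critical, operator_review):
    """Execute automatically only when safe; otherwise defer to a human."""
    if safety_critical or confidence < 0.9:
        return operator_review(action)  # human makes the final call
    return action

# Example: a routine decision executes automatically, while a
# safety-critical one goes to the operator callback for review.
routine = decide("adjust_speed", 0.95, safety_critical=False,
                 operator_review=lambda a: "escalated")
critical = decide("halt_conveyor", 0.95, safety_critical=True,
                  operator_review=lambda a: "escalated")
```

The key design point is that the override path is built into the decision flow itself, rather than bolted on afterwards, so operators can always intervene before a consequential action is taken.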
6. Align with existing regulations
Device manufacturers need to align with General Data Protection Regulation (GDPR) and AIA standards for data privacy and protection. For IoT-enabled AI, this means ensuring data minimisation, anonymisation, and lawful data processing. Implementing secure data-sharing protocols, especially when sharing IoT data across departments or with third-party providers, is necessary for privacy and compliance.
7. Implement incident reporting procedures
Finally, device manufacturers must implement incident-reporting procedures for cases where an AI system malfunctions or causes adverse effects, especially in high-risk applications, so that relevant authorities are notified of any significant breaches, incidents, or violations involving AI systems.
It is important for manufacturers to prioritise compliance from the start of 2025 to meet the AIA’s requirements and improve the reliability, safety, and ethical integrity of their IoT and AI systems. Compliance not only strengthens customer trust and brand reputation, but also positions companies to be ready for future regulations.
Based on the ‘Brussels Effect’, the AIA could become a template for adoption by other jurisdictions, much like the EU’s GDPR. If IoT device manufacturers can comply now, they are likely to have done the groundwork for other regulations that may come in future.
By building safer, more ethical systems now, manufacturers can minimise biases and protect users, ensuring long-term benefits through innovation and transparency.
Andrew Pery is an AI Ethics Evangelist at intelligent automation company ABBYY. His expertise is in artificial intelligence (AI) technologies, application software, data privacy and AI ethics. He has written and presented several papers on the ethical use of AI and is currently co-authoring a book for the American Bar Association.