Dr. Chris Hillman, Senior Director, AI/ML, and David Jerrin, Senior Director at Teradata, stress the importance of data management and AI models in meeting the requirements of the EU AI Act
Last month saw the first EU AI Act deadline come into effect, marking a sizable shift in how AI systems are regulated across the EU. The Act, which officially entered into force in August 2024, aims to ensure that AI systems are safe, transparent, and respect fundamental rights.
For businesses, this means a widespread overhaul of their approach to AI, with a particular focus on data management and AI models. But what does this mean in practice, and how can they make sure their data and AI models will comply with the Act ahead of the next big deadline this August?
Compliance within data and AI models
AI systems process data, make predictions, and generate insights that inform decision-making and drive innovation. Under the EU AI Act, the development and management of data and AI models come under intense scrutiny because of the huge amounts of information these systems require.
High-quality data and AI models are essential for developing reliable AI systems. Data and models should be free from bias, accurate, and robust, minimising errors that could lead to unfair or harmful outcomes. Achieving this requires rigorous data cleaning and validation, as well as model testing and continuous monitoring to maintain performance.
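For illustration, a minimal data validation sketch in Python (assuming tabular data in a pandas DataFrame; the column names and thresholds are hypothetical) shows the kind of automated check that can run before every training job:

```python
import pandas as pd

def validate_training_data(df: pd.DataFrame, required_columns: list) -> list:
    """Run basic quality checks and return a list of human-readable issues."""
    issues = []
    for col in required_columns:
        if col not in df.columns:
            issues.append(f"missing required column: {col}")
    for col, null_rate in df.isna().mean().items():
        if null_rate > 0.05:  # illustrative threshold: flag columns over 5% missing
            issues.append(f"column '{col}' is {null_rate:.1%} null")
    duplicates = int(df.duplicated().sum())
    if duplicates:
        issues.append(f"{duplicates} duplicate rows found")
    return issues

# hypothetical usage before a training job:
# problems = validate_training_data(df, ["customer_id", "income", "label"])
# if problems:
#     raise ValueError(problems)
```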
When it comes to data management, companies should guarantee integrity through data engineering processes with audit, balance, and control capabilities. Reusing trusted data in the form of an enterprise feature store reduces the temptation to reinvent the wheel in feature engineering and limits the ongoing quality and compliance overhead. For AI models, a strong ModelOps tool should be used so that models can be tracked and assessed regularly.
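The Act does not mandate any particular tool, but open-source trackers such as MLflow illustrate what regular model tracking can look like. The sketch below, with a hypothetical run name and a toy dataset, records the parameters, metrics and model artefact of a training run so it can be audited later:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# each run becomes an auditable record: parameters, metrics and the model artefact
with mlflow.start_run(run_name="credit_risk_v1"):  # hypothetical model name
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```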
Additionally, the EU AI Act mandates that AI systems be transparent and explainable. Companies need to document their data sources, processing methods, and the decision-making logic of their AI systems, and apply the same discipline to model development processes, including training methods.
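One lightweight way to keep such documentation close to the code is a structured "model card". The sketch below is illustrative only; the fields and example values are assumptions, not requirements of the Act:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelCard:
    name: str
    version: str
    data_sources: list
    processing_steps: list
    training_method: str
    decision_logic: str
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    name="credit_risk_v1",  # hypothetical model
    version="1.0.0",
    data_sources=["internal loan book 2019-2024", "credit bureau scores"],
    processing_steps=["deduplication", "null imputation", "feature scaling"],
    training_method="gradient-boosted trees, 5-fold cross-validation",
    decision_logic="scores below threshold are routed to manual review",
)
print(json.dumps(asdict(card), indent=2))  # store alongside the model artefact
```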
Effective model governance frameworks are also necessary for compliance. Companies must create policies and procedures for data collection, storage, processing, and sharing, and ensure those policies also cover model development, deployment, and monitoring. This includes employing robust security measures to safeguard models from tampering and ensuring compliance with data protection regulations such as the General Data Protection Regulation (GDPR).
Planning ahead of the next deadline
Given the critical role of data and AI models, there are several actionable steps companies can take now to prepare for the next EU AI Act deadline.
Firstly, it’s critical to conduct data and model audits. On the data side, companies should evaluate the quality, sources, and usage of their data, and make sure that collection practices align with GDPR and other relevant regulations. Model assessments should likewise be in place to test the accuracy, trustworthiness, and fairness of AI models.
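As a sketch of what a recurring model audit might compute, the function below reports per-group accuracy and positive-prediction rates. It assumes a pandas results table with hypothetical y_true, y_pred and protected-group columns:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def audit_predictions(results: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Per-group accuracy and positive-prediction rate; large gaps between
    groups are a signal that the model needs closer review."""
    return results.groupby(group_col).apply(
        lambda g: pd.Series({
            "n": len(g),
            "accuracy": accuracy_score(g["y_true"], g["y_pred"]),
            "positive_rate": g["y_pred"].mean(),
        })
    )
```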
Additionally, companies should adopt robust governance. To achieve this, organisations need to establish a thorough governance framework that includes policies for data and model development, deployment, and monitoring, as well as data security and privacy. Assign stewards to oversee these quality and compliance efforts; it is easier to gain buy-in and compliance if governance is positioned as an enabling process rather than an audit.

Transparency and explainability matter just as much, helping employees understand where the data came from and how a system reached a given decision. To achieve this, develop clear documentation for AI systems, detailing data sources, processing and training methods, and decision-making logic, and use explainable AI techniques to make models more interpretable.
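As one illustration of such a technique, the SHAP library attributes each prediction to individual feature contributions. The sketch below uses a toy dataset and model purely for demonstration:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP values attribute each prediction to individual feature contributions,
# giving reviewers a per-decision explanation rather than a black box
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])
print(shap_values.shape)  # one contribution per feature, per prediction
```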
Regularly auditing AI systems for bias, and taking corrective action as often as needed, should now be common practice. Use varied datasets and fairness-enhancing techniques to produce equitable outcomes. Correctly building and maintaining a feature store will also support and speed up new projects, as the datasets used are already proven to be trusted and reliable.
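By way of example, one simple fairness-enhancing technique is to reweight training samples so that each group-and-label combination carries equal total influence. The sketch below assumes a pandas DataFrame with hypothetical group and label columns:

```python
import pandas as pd

def balancing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Inverse-frequency weights so every (group, label) cell carries
    equal total weight during training, mirroring sklearn's 'balanced' scheme."""
    cell_counts = df.groupby([group_col, label_col])[label_col].transform("count")
    n_cells = df.groupby([group_col, label_col]).ngroups
    return len(df) / (n_cells * cell_counts)

# hypothetical usage: pass the result as sample_weight to model.fit(...)
```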
It’s also important to employ strong security measures to shield data from breaches and AI models from unauthorised access. Use encryption, access controls and regular security audits to protect any sensitive information.
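As a small illustration of encryption at rest, the widely used cryptography library can protect sensitive fields. Key handling here is deliberately simplified and the payload is a placeholder:

```python
from cryptography.fernet import Fernet

# in production the key would come from a secrets manager, never from source code
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"applicant national ID goes here")  # placeholder payload
restored = cipher.decrypt(token)  # only holders of the key can recover the plaintext
```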
And finally, a step that shouldn’t be missed is training. Educating employees about the EU AI Act and its implications is key to bringing them up to speed with the new standards and making them aware of their obligations.
The EU AI Act represents a crucial step towards ensuring that AI systems are safe, transparent, and respectful of fundamental rights. For companies, this means a renewed focus on data management and AI model development. High-quality data and impartial, transparent AI models are fundamental both for compliance and for developing trustworthy AI solutions. By prioritising accuracy, governance, and ethical development, companies will be able to meet regulatory requirements while encouraging innovation and trust in their AI offerings.

Chris Hillman is Senior Director, AI/ML in the International Region and has been responsible for developing and articulating the Teradata Analytics 1-2-3 strategy and supporting the direction and development of ClearScape Analytics. Prior to his current role, Chris led the International Data Science Practice and has worked on a large number of AI projects in the International Region, focusing on generating measurable ROI from analytics in production at scale using Teradata, open-source and other vendor technologies.