The Open Data Institute’s (ODI’s) latest white paper, ‘Building a Better Future with Data and AI’, highlights major deficiencies in the UK’s technological infrastructure that jeopardise the benefits individuals, society, and the economy are expected to gain from the AI surge. The paper also presents the ODI’s recommendations for fostering diverse and equitable data-centric AI.
Based on these findings, the ODI urges the new government to implement five measures to enable the UK to harness the opportunities presented by AI while mitigating potential risks:
- Ensure wide access to high-quality, well-governed data from both public and private sectors to nurture a diverse and competitive AI market
- Enforce data protection and labour rights within the data supply chain
- Empower individuals to have a greater say in the sharing and utilisation of data for AI
- Revise intellectual property regulations to ensure AI models are trained in ways that prioritise stakeholder trust and empowerment
- Enhance transparency regarding the data used to train high-risk AI models
The ODI’s white paper posits that emerging AI technologies hold significant promise for transforming industries such as diagnostics and personalised education. However, substantial challenges and risks accompany wide-scale adoption, particularly given generative AI’s dependence on a limited number of machine learning datasets that lack robust governance frameworks.
This inadequacy poses significant risks to both the adoption and deployment of AI, as poor data governance can lead to biases and unethical practices, undermining trust and reliability in crucial areas such as healthcare, finance, and public services. These risks are compounded by a lack of transparency, which hampers efforts to address biases, remove harmful content, and ensure compliance with legal standards. To address these issues, the ODI is developing a new ‘AI Data Transparency Index’ to provide clarity on data transparency across different system providers.
“If the UK is to benefit from the extraordinary opportunities presented by AI, the government must look beyond the hype and attend to the fundamentals of a robust data ecosystem built on sound governance and ethical foundations,” said Sir Nigel Shadbolt, Executive Chair and Co-Founder of the ODI. “We must build a trustworthy data infrastructure for AI because the feedstock of high-quality AI is high-quality data. The UK has the opportunity to build better data governance systems for AI that ensure we are best placed to take advantage of technological innovations and create economic and social value whilst guarding against potential risks.”
Before the General Election, Labour’s manifesto outlined plans for a National Data Library to consolidate existing research programmes and enhance data-enabled public services. However, the ODI emphasises that we first need to ensure the data is AI-ready. Besides being accessible and trustworthy, data must meet agreed standards, which necessitates a data assurance and quality assessment framework.
The ODI’s recent research has revealed that, with few exceptions, AI training datasets currently lack robust governance measures throughout the AI life cycle, posing safety, security, trust, and ethical challenges related to data protection and fair labour practices. These issues must be addressed if the government is to fulfil its plans.
Additional insights from the ODI’s research include:
- The public needs protection against the risk of personal data being used illegally to train AI models. Measures must be taken to address the ongoing risks of generative AI models inadvertently leaking personal data through user prompts. Emerging privacy-enhancing technologies have significant potential to safeguard people’s rights and privacy as AI becomes more prevalent
- Systems flagged in the Partnership on AI’s AI Incident Database rarely disclose key transparency information about data sources, copyright, and the inclusion of personal information
- Intellectual property law must be urgently updated to protect the UK’s creative industries from unethical AI model training practices
- Legislation protecting labour rights will be essential to the UK’s AI Safety agenda
- The rising cost of high-quality AI training data excludes potential innovators such as small businesses and academia