The world is shifting fast, with AI now embedded across industries and operations. Yet more than 80% of AI projects fail, not because the models are flawed, but because the data isn’t ready.
As we enter the agentic era, where AI systems act autonomously, success depends on how well humans and AI systems collaborate. That collaboration starts with data skills and the ability to govern and interpret data with precision and purpose.
How are data and AI skills influencing the success or failure of IoT deployments?
Without the right skills to manage data, even the most advanced IoT systems can fall short. Data and AI capabilities are now central to the success or failure of IoT deployments; the question is no longer ‘what can this model do?’ but ‘how do we structure ourselves so humans and agents work together safely, productively and at scale?’
Any AI model is only as good as the data it uses. That’s why ensuring access to accurate, unbiased data in sufficient quantities is critical for effective AI model development and deployment. However, data quality alone isn’t enough; teams also need the skills to govern and apply that data in context.
Organisations that embed AI literacy across operations are better equipped to unlock value and adapt as technologies continue to evolve. Those that don’t risk falling behind, not because of the tools they lack but because of the skills they haven’t built.
Which roles within IoT organisations most urgently need AI training?
In IoT organisations, the most urgent need for AI training lies with engineers, data scientists, operational leaders and business executives. Engineers managing device networks must adapt to edge AI and real-time analytics, while data scientists require deeper fluency in generative AI to build predictive models that drive automation and efficiency.
Operational teams translating insights into business action also need AI literacy to interpret outputs and guide strategic decisions. And finally, business executives need to be able to understand what AI is doing, why it is being used, and what the risks are. It is often lonely at the top of any organisation, and this is particularly relevant in the AI era, where leaders are at significant risk of not knowing the unknowns that they really need to know.
The rise of generative AI has accelerated job growth and demand for AI risk and governance specialists, engineers and data scientists. According to Cisco’s 2025 AI Workforce Consortium report, nearly 80% of tech roles now demand AI fluency, making workforce transformation a critical priority.
As IoT systems become more intelligent and autonomous, organisations must ensure that teams across disciplines understand how to work with intelligent tools. Developing AI expertise across roles is now a fundamental requirement for IoT organisations to remain competitive and future ready.
What specific AI capabilities should IoT professionals be developing to stay relevant?
To stay relevant in today’s evolving IoT landscape, professionals must build both technical AI capabilities and irreplaceable human skills. On the technical side, key competencies include edge AI for real-time processing, anomaly detection and AI driven analytics. These skills enable smarter infrastructure and autonomous decision-making across connected systems.
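To make the anomaly-detection competency concrete, here is a minimal sketch of the kind of lightweight check an edge device might run on a sensor stream. The sensor values, window size and threshold are hypothetical; real deployments would tune these and often use richer models.

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=10, threshold=3.0):
    """Flag readings that sit more than `threshold` standard deviations
    from the rolling mean of the last `window` normal samples."""
    history = deque(maxlen=window)

    def check(reading):
        is_anomaly = False
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) > threshold * sigma:
                is_anomaly = True
        if not is_anomaly:
            history.append(reading)  # keep outliers out of the baseline
        return is_anomaly

    return check

# Hypothetical temperature stream: ten normal readings, then a spike.
detect = make_anomaly_detector(window=10, threshold=3.0)
readings = [20.0, 20.1, 19.9, 20.2, 20.0, 19.8, 20.1, 20.0, 19.9, 20.2, 95.0]
flags = [detect(r) for r in readings]
```

Because the check uses only a small rolling window and the standard library, it can run on constrained edge hardware without a round trip to the cloud, which is the point of the real-time processing skills described above.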
Equally important are human capabilities that AI cannot replicate. Judgement, creativity, empathy and ethical reasoning become the new competitive differentiators as AI becomes more embedded in infrastructure.
With 78% of ICT roles now requiring AI skills, demand is rising for professionals who can blend technical expertise with leadership and strategic thinking. IoT professionals who combine technical fluency with human insight will be positioned to lead innovation and adapt to the shifting demands of AI-integrated environments.
How can companies balance automation with human oversight in IoT systems?
As AI increasingly manages connected infrastructure, companies must maintain a careful balance between automation and human oversight in IoT systems. Automation can streamline operations and reduce manual load, but human judgement remains essential for ethical governance and anomaly detection.
Fully autonomous systems can make errors, especially in unpredictable environments, so human oversight acts as a safeguard against unintended outcomes. Employees must be empowered with the skills and authority to interpret AI outputs, challenge assumptions, and guide decisions that align with business values.
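One common way to strike the balance described above is a simple human-in-the-loop gate: routine, reversible actions run automatically, while anything uncertain or irreversible is escalated to an operator. The sketch below is illustrative only; the action names, confidence threshold and approval hook are assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A hypothetical action proposed by an AI agent managing IoT infrastructure."""
    description: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    reversible: bool   # can the action be undone safely?

def dispatch(action, approve_fn, auto_threshold=0.95):
    """Execute automatically only when the model is confident AND the
    action is reversible; otherwise escalate to a human operator."""
    if action.confidence >= auto_threshold and action.reversible:
        return "executed"
    # Human oversight: the operator interprets the AI's proposal and
    # can approve or reject it.
    return "executed" if approve_fn(action) else "rejected"

routine = ProposedAction("restart edge gateway", confidence=0.98, reversible=True)
risky = ProposedAction("firmware rollout to 10k devices", confidence=0.97, reversible=False)

# Stand-in for a real review queue: here the operator rejects everything.
reject_all = lambda action: False
results = [dispatch(routine, reject_all), dispatch(risky, reject_all)]
```

The design choice worth noting is that confidence alone does not grant autonomy: even a highly confident but irreversible action still routes through a person, which is exactly the safeguard against unintended outcomes the paragraph above describes.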
What practical steps can IoT-focused businesses take to embed AI literacy across their workforce?
Embedding AI literacy across an IoT-focused workforce requires more than occasional training sessions or a one-off program. Businesses need to move beyond isolated agent deployments and toward coherent operating models that support collaboration between employees and intelligent systems.
This starts with a structured roadmap that clearly defines the strategic role of AI and outlines how teams will engage with automated agents in practice.
Organisations should apply design principles that include governance protocols, agent lifecycle planning, and performance systems that measure both human and AI contributions. Embedding data privacy and ethical safeguards into every phase of deployment is essential to ensure long-term responsible use.
To address capability gaps, companies can invest in practical training, develop internal talent pathways and partner with academic institutions or technology providers to expand access to expertise.
When AI literacy becomes part of daily workflows, systems perform more reliably and organisations are better equipped to scale innovation across connected environments.
By Stuart Harvey, Chief Executive Officer for Datactics
This article originally appeared in the February issue of IoT Insider.