As the embedded systems industry converged on Nuremberg for embedded world 2025, conversations with exhibitors reflected how thoroughly AI has permeated hardware and devices.
MIPS, which officially relaunched in 2024, said it sees physical AI as the future and a significant part of its strategy. With the announcement of its Atlas portfolio, divided into ‘sense’, ‘think’ and ‘act’, the company is bringing its expertise in compute systems to AI.
While the mention of AI may summon grandiose images of humanoid robots able to act and operate as humans do, the application of AI doesn’t have to be complex, said Sameer Wasson, CEO. It may well be used for humanoid robots, but it may equally be used for more mundane, everyday tasks like unloading and reloading a dishwasher.
“So far, the boom on AI has been in the inference space, in training and in the data centre,” said Wasson. “But if you think about it, the actual implementation of AI [is] in our day-to-day life.”
The new Atlas portfolio, which is made up of the I8600 (the ‘sense’ part), the S8200 (‘think’) and the M8500 (‘act’), aims to provide a “map” to customers who want to enable physical AI, added James Prior, Marketing Executive.
“Sensing is understanding the world around the robot (for example), taking in the different types of inputs … thinking is the AI model, taking that data and trying to decide what to do … then the third part is acting, and that’s the implementation to reality,” explained Prior.
AI in antenna design
AI in the form of machine learning (ML) is an integrated feature in Ignion’s Oxion platform, which was formally launched at embedded world 2024, and the newest version was showcased at this year’s event.
The Oxion platform was brought out by Ignion in recognition of the complexity developers face when designing with antennas, as part of a concerted effort from the company to ensure antennas are integrated faster and more easily, speeding up the design process and time-to-market.
“AI is a cornerstone of everything we do, and machine learning was one of the main features we developed for the platform because we believe that providing this level of flexibility and speeding up that process is essential,” said Aitor Moreno, Product Manager.
Notably, the newest version has an AI assistant, ‘Max’, which provides customer support to those using the platform.
“One of the pillars when it comes to the platform is to provide a space where we share what we know to our users,” added Moreno. “The assistant is helping in that sense and supporting the things you face … you may have a lot of questions.”
Newly launched Edge AI platform
Arm were present at the show to talk about their newly launched Armv9 edge AI platform, which supports OEMs in meeting growing computing demands and gives them the flexibility to execute AI workloads. The platform features the Arm Cortex-A320 CPU and the Arm Ethos-U85 NPU.
According to Paul Williamson, SVP and GM of the IoT division, the company continuously looks five years into the future to understand what technologies will grow in adoption, which can vary depending on the industry vertical – in the consumer market, Williamson said, the pace of adoption can be quicker at three to four years; but in automotive the timescale is longer.
On the question of how Arm are able to look this far ahead, Williamson said: “We’re in a fantastic and somewhat unique position in supporting partners who are developing applications across the world. But we don’t just talk to the companies who are producing silicon. We also talk to the OEMs and the end consumers of the technology to understand what they are trying to achieve.”
In other words, through these conversations with their partners, OEMs and end users, as well as by examining longer-term trends in compute, Arm are able to piece together a picture of which technologies will be leading the way five years from now.
There were three key points Williamson summarised to me about this new platform: it delivers a step change in AI performance, approximately 10x that of previous generations; moving from a microcontroller to a microprocessor architecture means it can be paired with larger memory, capable of handling larger AI models; and it ensures stringent security.
Not only will the adoption of AI grow, but the ways in which people are using it will expand, demonstrating the industry’s continued ingenuity in solving real-world problems. One trend Williamson noted was demand for one device to run multiple different models, or for multiple devices to run different models. Arm demonstrated this in action with a Raspberry Pi device that detected objects using the YOLOv11x model and generated a story using the TinyStories SLM.
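The pattern in Arm’s demo, where two independent models run concurrently on one device and one feeds the other, can be sketched in a few lines. The functions below are hypothetical stubs standing in for the object detector and the story-generating language model (not Arm’s demo code, and not the real YOLOv11x or TinyStories APIs); the point is the producer-consumer structure, not the models themselves.

```python
import queue
import threading

# Hypothetical stand-ins for the two on-device models.
def detect_objects(frame):
    """Stub 'vision model': pretend to find objects in a camera frame."""
    return ["cat", "ball"] if frame % 2 == 0 else ["dog"]

def generate_story(objects):
    """Stub 'language model': turn detected objects into a short story."""
    return f"Once upon a time, a {' and a '.join(objects)} went on an adventure."

detections = queue.Queue()

def vision_worker(num_frames):
    # First model: runs independently, pushing results for the second model.
    for frame in range(num_frames):
        detections.put(detect_objects(frame))
    detections.put(None)  # sentinel: no more frames

def story_worker(stories):
    # Second model: consumes the detector's output and generates text.
    while (objects := detections.get()) is not None:
        stories.append(generate_story(objects))

stories = []
producer = threading.Thread(target=vision_worker, args=(3,))
consumer = threading.Thread(target=story_worker, args=(stories,))
producer.start()
consumer.start()
producer.join()
consumer.join()

for story in stories:
    print(story)
```

The queue decouples the two models so each can run at its own pace, which is the essence of collaborating models on a single device.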
“We may have anticipated the increase in matrix-based compute, but there’s no way five years ago I would have said we’re going to run two independent models on device, doing two different things and collaborating,” said Williamson.