At Solsta’s Edge AI seminar, held in London on 26th June, conversations centered around the challenges involved in integrating Edge AI into devices, as attendees sought to understand how they could make sense of this emerging technology for their particular application.
Edge AI is a relatively new technology in which data is processed at the ‘edge’, on the device itself, as opposed to in the Cloud. Doing so reduces bandwidth usage, lowers latency, and improves the security of the data being processed. AI can intelligently sort through large amounts of data and identify patterns.
Consequently, Edge AI is particularly attractive in applications such as object identification – not just being able to pick out a car from a person, but being able to call the authorities if a car crashes, for instance – autonomous vehicles, and industrial monitoring.
A series of five presentations held over the course of the day covered topics ranging from simplifying and maintaining AI models on the Edge, delivered by Raul Vergara, CRO of Thistle Technologies; to the era of on-device AI, broached by Matthias Golke from DeepX; and CRA-driven security, delivered by Hector Tejero, Founder of IEES Ltd.
Best practices for securing AI on the Edge
In his presentation, Vergara shared why security was such an important part of deploying AI models on the Edge, the challenge of securing devices, and the best practices for doing so.
When considering whether to deploy AI at the Edge in the first place, Vergara said, the simple ‘BLERP’ rule helped: bandwidth, latency, energy, resilience and privacy. By asking questions across these five areas, a company can better understand whether Edge AI is something it needs.
For example, “do I want to send all of the data to the Cloud [all] of the time? If you’re doing vibration sensing, that’s a lot of data … so do I want to send it to the Cloud? No,” explained Vergara. “Latency: can I wait? Do I have a robot at a manufacturing plant that is doing certain things? Do I want to wait for it to go to the Cloud?”
Security comes into the conversation because there are currently around 18 billion connected devices, forecast to reach 40 billion by 2030, which vastly increases the attack surface and the points of entry for threat actors.
Different products, like microcontrollers or microprocessors from different vendors, add complexity, and securing devices can become “unmanageable”, said Vergara.
Some of the key threats facing these devices span IP theft, firmware tampering, over-the-air hijacking, and credential extraction.
Therefore, implementing core security pillars – prevent versus notify, secure boot, secure OTA, a hardware root of trust, and securing data at rest – can go a long way towards improving a company’s cybersecurity posture.
“On the notify side, I’m going to apply a model. It’s going to do anomaly detection and look if there’s anything that looks weird [on the] device, and I’ll tell you if it was hacked,” said Vergara. “That … for me is an after the fact, and it’s a way of solving the problem. It’s easier because I didn’t have to deal with the microcontroller architecture and the security features.
“But I’m not securing the device … that’s why … I’d rather prevent it from happening.”
Sharing best practices for secure boot and secure OTA, Vergara advised the audience that companies should only secure the devices they can maintain and, rather than performing full firmware updates, should update only the AI model.
By leveraging these best practices – addressing security as a built-in feature rather than one that is “bolted on”, using secure boot and secure OTA as a baseline, embracing a platform to maintain device security, and regarding it as a competitive advantage – companies position themselves well to handle potential incoming threats.
“I think security is becoming a differentiator now,” Vergara concluded.