The advent of Edge AI has largely been helped by increases in hardware performance, with devices now able to run larger models and perform real-time analytics while addressing constraints associated with Cloud processing such as bandwidth restrictions and cost. Umar Ahmed, AI and Robotics Technical Specialist at Advantech, spoke to IoT Insider about the evolution of Edge AI, following on from a presentation delivered at Advantech’s Edge Computing Summit.
The adoption of AI mirrors the Gold Rush in the US in the mid-to-late nineteenth century – an analogy Ahmed used to describe how companies have recognised the value of adopting AI and rushed to do so.
“In the Gold Rush, the people who made the money were not the people who mined the gold, but those who were selling shovels,” said Ahmed. Following this analogy, those who will benefit the most from the AI era are not the companies adopting it, but the companies manufacturing the chips.
There are key industries that have deployed Edge AI: autonomous cars, robots, agriculture, and industrial manufacturing. Its value is not restricted to just these industries – Ahmed also noted that Netflix and Spotify are using AI to better understand how their users behave on their platforms, demonstrating how universal the technology can be.
Zeroing in on how Edge AI is being integrated into the robotics industry, Ahmed explained that the aim is for robots to mimic how humans operate, with AI adding the capability to perceive the world more intelligently. Although LiDAR sensors and cameras help with this, AI creates that additional layer of functionality.
For instance, a camera system can allow a robot to recognise an office space, but AI will enable it to perform object detection and distinguish a desk from a chair. This doesn’t have to be just an office space – it can also be in a large-scale factory, where there will be plenty of hazards to navigate.
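As a rough illustration of that extra layer – and not a description of Advantech’s own stack – the sketch below runs a pretrained, open-source object detector over a single camera frame using PyTorch and torchvision; the model choice, image file name, and confidence threshold are illustrative assumptions.

```python
# Illustrative sketch only: a pretrained detector layered on top of a camera frame.
# Assumes PyTorch and torchvision are installed; model and threshold are arbitrary choices.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("office_frame.jpg")  # hypothetical camera frame from the robot
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Keep only confident detections and map class indices to human-readable labels,
# e.g. telling a chair apart from a desk in the frame.
labels = weights.meta["categories"]
for label_idx, score, box in zip(
    detections["labels"], detections["scores"], detections["boxes"]
):
    if score > 0.7:  # arbitrary confidence threshold
        print(labels[label_idx], f"{score:.2f}", box.tolist())
```

On a constrained Edge device the same idea would typically use a lighter model, but the principle – raw camera frames in, labelled objects out – is the layer of functionality Ahmed describes.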
“If you look at it from the application perspective, Amazon and all these big warehouse companies, it’s very useful,” Ahmed added.
Picking the right architecture
Understanding the use case and end application will fundamentally guide a company towards choosing the right hardware and architecture – a key point in the presentation Ahmed took part in at the Edge Computing Summit.
CPUs, GPUs and NPUs all have advantages and drawbacks. CPUs consume less power, Ahmed noted, are widely available, and can run simple AI models – but they are limited in what they can run and will not be able to handle parallel processing and computing.
“If you want to implement deep learning applications on that [CPU] … it will be too slow … so it does not support deep learning.”
GPUs, by way of contrast, are designed to run deep learning applications, but as a consequence are power-hungry and generate a lot of heat that has to be managed.
“The ideal use case for them [GPUs] is computer vision-based applications where you really want to do some real-time [analytics],” said Ahmed.
NPUs, finally, are optimised for small form factors and well suited to embedded applications, but are less flexible.
“The real thing that customers need to understand is there are a lot of variables that they need to get right,” Ahmed stressed. “Selecting the hardware is one variable. It’s only one piece of the puzzle. Even if I make a good decision at this end, I need to be aware of other variables, and one of them is software.”
Some of the questions companies need to be asking themselves about software are:
- What software can the hardware support?
- What operating system can run on it?
- How easy is it to do development on it?
- How easy is it to build AI applications?
The final piece of the puzzle is determining where the hardware will end up, whether that is in an office, an airport, or a field, because these environments have different sensitivities and dictate how important power and heat optimisation will be.
Balancing software and hardware
If the balance between software and hardware being optimised to run AI is not right, “it can be a turn off for both sides,” said Ahmed – something the industry has historically struggled with.
For example, an AI algorithm could be developed and promise high accuracy, but deploying it on the Edge for the intended use case may not yield the same results, especially if the algorithm has been designed for a specific kind of GPU.
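A small, generic example of how that mismatch surfaces in practice – purely an illustrative PyTorch sketch, not Ahmed’s or Advantech’s tooling – is deployment code that assumes a CUDA GPU and has to fall back to a slower device when the Edge hardware does not have one:

```python
# Illustrative sketch: a model tuned for a specific GPU may still run on an Edge
# device without one, but precision and latency characteristics will differ.
import torch


def pick_device() -> torch.device:
    """Prefer a CUDA GPU if available, otherwise fall back to the CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")


device = pick_device()
model = torch.nn.Linear(512, 10).to(device)  # stand-in for a real deployed model

# Half precision is common on data-centre GPUs but unsupported or slow elsewhere,
# so the deployed model may not behave exactly as it did during development.
if device.type == "cuda":
    model = model.half()

x = torch.randn(1, 512, device=device, dtype=next(model.parameters()).dtype)
print(model(x).shape, "running on", device)
```

The fallback keeps the application running, but it is exactly the kind of silent compromise that produces an unoptimised product unless hardware vendors and software developers plan the deployment together.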
“That’s why it’s very important for the vendors … to help with the software development as well,” said Ahmed. “If the two parties are not communicating with each other and working towards … successful deployment … they will run into this mismatch, and it will not be an optimised product.”
Ultimately, the end user won’t have a product that performs well and meets their expectations.
Other challenges facing AI adoption include focusing too heavily on building the AI application without taking practical implications into account; setting up a seamless developer environment; and the software-centric nature of AI, which calls for experts who understand how to build AI models.
“These are the challenges and all [of] these when combined together, impact each one of us within our AI journey in some way, because we’ll be very capable in one or two steps, but we may lack expertise, [or] may need help on … other steps,” said Ahmed. “That’s why working together [and] the whole ecosystem can benefit.
“It is a tough journey. It’s not that easy.”