Collaboration is key to growing Edge AI development, writes Nebu Philips, Senior Director, Technical Product Marketing, Synaptics
Edge AI application developers aim for a sweet spot for processing performance, but there is a lack of suitable silicon in that performance range. Appropriate development kits are limited, and with the market still maturing, customers can be wary of vendor lock-in. Supporting the adoption of open-source technologies is essential. If IoT vendors worked together to address these challenges, then more customers could benefit from access to advanced Edge AI technology, resulting in a stronger, more scalable Edge AI ecosystem.
That is where co-opetition, rather than competition, comes in. Co-opetition is a model where competitors still vie for customers but work together to expand the overall market. The Edge AI market is set to expand, but market growth can be significantly accelerated if players across the IoT ecosystem align on several measures.
Group problems, group solutions
In just a few short years, it has become possible to run remarkably compact AI models on low-power processors, often paired with highly efficient AI accelerators, with minimal power consumption, while achieving increasingly sophisticated results. This rapid technological progress has made Edge AI more versatile, more cost-effective, and more widely applicable than ever before.
However, progress has outpaced perception. What’s now feasible often exceeds what customers believe is possible. Many still assume Edge AI is too complex, too costly, or too power hungry for their needs, when in fact, it can be simpler and more accessible than they imagine.
Everyone in the industry is working to educate the market about the possibilities of Edge AI; that part will come naturally. The tougher challenge is getting agreement among the suppliers in the market. Silicon vendors are struggling to keep up with the needs of developers, and there is an industry-wide lack of scalable silicon portfolios designed for today’s Edge AI applications.
From a processing perspective, one common way to express AI inferencing capability is in tera-operations per second (TOPS). At one end of the spectrum, existing solutions provide single-digit TOPS; at the other, solutions in the high teens and beyond are available. Between these extremes there is a gap, and the sweet spot for AI acceleration in IoT devices, typically vision-based applications, sits in that gap.
This gap presents an opportunity for silicon vendors to deliver efficient, cost-effective solutions in this underserved middle ground.
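To make that sweet spot concrete, a rough back-of-the-envelope estimate helps. The sketch below (in Python, with hypothetical workload figures chosen purely for illustration, not drawn from any specific product) shows how a multi-stream vision workload can land in that mid-range TOPS band rather than in single-digit or high-teens territory.

```python
# Back-of-the-envelope estimate of the accelerator performance a vision
# workload demands. All workload numbers below are hypothetical and chosen
# purely for illustration.

def required_tops(gmacs_per_frame: float, fps: float, streams: int = 1,
                  utilisation: float = 0.5) -> float:
    """Estimate the peak TOPS needed to sustain a vision inference workload.

    gmacs_per_frame: giga multiply-accumulates for one inference
    fps:             target frames per second per stream
    streams:         number of concurrent camera streams
    utilisation:     fraction of peak TOPS realistically achievable
    """
    ops_per_second = gmacs_per_frame * 1e9 * 2 * fps * streams  # 1 MAC = 2 ops
    return ops_per_second / (utilisation * 1e12)

# A single lightweight detector: comfortably single-digit TOPS territory.
print(f"{required_tops(gmacs_per_frame=5, fps=30):.1f} TOPS")              # ~0.6

# Several concurrent streams with a heavier model: lands in the mid-range gap.
print(f"{required_tops(gmacs_per_frame=10, fps=30, streams=8):.1f} TOPS")  # ~9.6
```

Under assumptions like these, even a modest multi-camera deployment quickly outgrows single-digit TOPS without coming anywhere near data-centre-class requirements.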
Tools and more
One of the most overlooked challenges in Edge AI is the scarcity of development kits suited to Edge IoT and Edge AI. As a result, developers often default to a platform that is almost certainly overpowered. This is not unreasonable: the first thing application developers experimenting with Edge AI want is assurance that their applications can work, so it is natural to choose a sufficiently powerful platform, preferably one supported by a familiar development kit.
The problem? The resulting prototype will be optimised for a platform that is overpowered for the application, which ultimately means it will be overpriced too. That AI solution almost certainly will not scale down to a less powerful but perfectly capable silicon platform that is more cost-effective for an Edge IoT application. It is important to use a dev kit that makes it possible to move a design directly into production as a commercial solution.
In the worst case, the developer might simply quit there, not realising that an economically viable solution is eminently possible. But even a customer determined enough to discover that a good solution exists will have wasted time and money developing an AI application that doesn’t scale down.
What the market needs is clear: readily available, easy-to-use tools that allow developers to port their solutions to silicon that is appropriate for their applications.
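As one illustration of what porting down can look like in practice, the sketch below uses TensorFlow Lite’s standard post-training integer quantisation to shrink a model developed on an overpowered prototype into a form that constrained Edge silicon can typically run. This is a generic example only; the saved-model path and calibration data are placeholders, and the exact flow depends on the toolchain the target silicon vendor supplies.

```python
# A generic post-training quantisation flow using TensorFlow Lite.
# Paths and the calibration generator are placeholders; the target device's
# own toolchain may require additional, vendor-specific conversion steps.
import numpy as np
import tensorflow as tf

SAVED_MODEL_DIR = "prototype_model/"   # model trained on the overpowered dev board

def representative_data_gen():
    # A small sample of representative input frames calibrates the int8 ranges.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]  # placeholder data

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force full-integer quantisation so the model fits small NPUs and accelerators.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("edge_model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```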
Finally, there is a lack of clarity on software development environments when integrating AI functions into application workflows. Should an Edge AI developer choose Linux or RTOS? C++ or Python? It is possible that a developer will create an application and never revisit it – never upgrade it, never expand the portfolio with similar products, never produce a next-generation part. In that case, there are few ramifications for any choice. But how likely is that? Any developer with serious ambitions is likely to expand their product line, and over time, release upgrades and launch new generations of products.
Given the extraordinary pace of innovation in AI and machine learning (ML), it is reasonable for developers to be concerned they’ll end up locked into an environment that does not keep up with changes in the market.
That’s why, at Synaptics, we believe an open-source approach will always be better for developers of IoT and Edge AI applications. Unfortunately, the industry lacks a truly rich, open-source environment for development. This gap needs attention, and there needs to be a shared responsibility within the industry to make it happen.
The AI technology stack is already complex. Each layer involves multiple solution providers, offering everything from curated datasets to model creation. There are optimisation companies, different toolchain providers, and companies with differing strategies for device lifecycle management of AI-enabled applications. This is a lot for a prospective Edge AI application developer to navigate. Perhaps now is the time to provide resources that support those entering this fast-moving space.
The opportunity
The market for AI semiconductors is on track to be worth $159 billion by the end of 2028, according to Gartner. This figure encompasses all semiconductors for all AI applications, but Gartner projects a compound annual growth rate of 24% in the market, with a significant boost from AI-based applications moving out of data centers into PCs, smartphones, and Edge and endpoint devices.
The use of AI was already on an upward trajectory, but now that AI has rapidly become integral to Internet search, demand across industries is likely to increase even faster. Edge AI is clearly the way to add value to consumer electronics.
Other industries are embracing Edge AI as a means to enhance operational efficiency and reduce latency by processing data closer to its source. Edge AI is being applied in healthcare, telecommunications, and agriculture, as well as in robotics, industrial automation, and other manufacturing systems.
The IoT market is primed to add AI/ML, but the industry does not seem positioned to take advantage of this interest. We have seen what is required to expand a market: fit-for-purpose platforms, accessible tools, and strong support from an open-source community.
This is an open call to the IoT and Edge AI ecosystem: let’s work together to lower the barriers, close the gaps, and unlock the full potential of Edge AI. We look forward to hearing from you!

Nebu Philips is Senior Director, Technical Product Marketing at Synaptics, where he is responsible for overseeing strategic initiatives and driving business growth in IoT and Edge processor solutions.