Ceva adds NPUs for AIoT devices

Ceva, a provider of silicon and software IP for Smart Edge devices, unveiled an expansion to its Ceva-NeuPro family of Edge AI NPUs by introducing the Ceva-NeuPro-Nano NPUs. These NPUs, designed for high efficiency and self-sufficiency, are tailored to enable semiconductor firms and OEMs to incorporate TinyML models into their SoCs for an array of consumer, industrial, and general-purpose AIoT products.

TinyML is defined as the application of machine learning models on devices that are limited in power and resources, enhancing the capabilities of the Internet of Things (IoT). With a surge in demand for effective and specialised AI solutions within IoT devices, the TinyML market is set to expand significantly.

Research by ABI Research predicts that by 2030, more than 40% of TinyML deployments will rely on specialised TinyML hardware, moving away from generic MCUs. The Ceva-NeuPro-Nano NPUs are specifically engineered to address the performance hurdles associated with TinyML, aiming to make AI widespread, cost-effective, and feasible across diverse applications such as voice recognition, visual processing, predictive maintenance, and health monitoring in both consumer and industrial IoT settings.

The newly developed Ceva-NeuPro-Nano Embedded AI NPU architecture boasts full programmability and excels in executing neural networks, feature extraction, control code, and DSP code. It supports the most advanced machine learning data types and operations, including native transformer computations, sparsity acceleration, and rapid quantisation.
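To make the data-type support concrete, the sketch below shows textbook symmetric per-tensor int8 weight quantisation, the kind of low-precision representation NPUs in this class accelerate natively. It is a generic illustration, not Ceva's implementation, and the function names are invented for the example.

```python
# Generic symmetric int8 quantisation sketch (illustrative only; not
# Ceva's proprietary scheme). Float weights are mapped to int8 with a
# single per-tensor scale factor.

def quantize_int8(weights):
    """Map float weights to int8 using one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.50, -1.27, 0.03, 0.90]
q, scale = quantize_int8(weights)   # q == [50, -127, 3, 90]
```

Storing int8 instead of float32 cuts weight memory by 4x before any further compression, which is why TinyML toolchains quantise aggressively.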

This advanced, autonomous architecture allows the Ceva-NeuPro-Nano NPUs to provide superior power efficiency, a reduced silicon footprint, and enhanced performance compared to traditional processor solutions typically used for TinyML tasks, which often combine CPU or DSP with AI accelerator architectures. Additionally, Ceva’s NetSqueeze AI compression technology processes compressed model weights directly, eliminating the need for an intermediate decompression stage, thereby achieving up to an 80% reduction in memory footprint. This critical advancement helps overcome a major limitation hindering the widespread adoption of AIoT processors.
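The NetSqueeze format itself is proprietary, but the general idea of operating on compressed weights without an intermediate decompression buffer can be sketched with a toy zero-run-length encoding: the dot product consumes the compressed stream directly, so no dense weight array ever needs to exist in memory.

```python
# Toy illustration of computing directly on a compressed weight stream
# (a simple zero-run-length code; purely illustrative, not NetSqueeze).

def compress(weights):
    """Encode weights as (zero_run_length, nonzero_value) pairs."""
    stream, run = [], 0
    for w in weights:
        if w == 0:
            run += 1
        else:
            stream.append((run, w))
            run = 0
    return stream  # trailing zeros contribute nothing to a dot product

def dot_compressed(stream, activations):
    """Dot product straight from the stream: no decompression buffer,
    and the zero-weight multiplies are skipped entirely."""
    acc, i = 0, 0
    for run, w in stream:
        i += run
        acc += w * activations[i]
        i += 1
    return acc

w = [0, 0, 3, 0, -2, 0, 0, 1]
x = [1, 2, 3, 4, 5, 6, 7, 8]
result = dot_compressed(compress(w), x)   # 3*3 + (-2)*5 + 1*8 == 7
```

Skipping the expansion step is what saves memory: only the compressed stream is resident, never the full dense tensor.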

“Ceva-NeuPro-Nano opens exciting opportunities for companies to integrate TinyML applications into low-power IoT SoCs and MCUs and builds on our strategy to empower smart Edge devices with advanced connectivity, sensing and inference capabilities. The Ceva-NeuPro-Nano family of NPUs enables more companies to bring AI to the very Edge, resulting in intelligent IoT devices with advanced feature sets that capture more value for our customers,” said Chad Lucien, Vice President and General Manager of the Sensors and Audio Business Unit at Ceva. “By leveraging our industry-leading position in wireless IoT connectivity and strong expertise in audio and vision sensing, we are uniquely positioned to help our customers unlock the potential of TinyML to enable innovative solutions that enhance user experiences, improve efficiencies, and contribute to a smarter, more connected world.”

According to Paul Schell, Industry Analyst at ABI Research: “Ceva-NeuPro-Nano is a compelling solution for on-device AI in smart Edge IoT devices. It addresses the power, performance, and cost requirements to enable always-on use cases on battery-operated devices integrating voice, vision, and sensing across a wide array of end markets. From TWS earbuds, headsets, wearables, and smart speakers to industrial sensors, smart appliances, home automation devices, cameras, and more, Ceva-NeuPro-Nano enables TinyML in energy-constrained AIoT devices.”

The Ceva-NeuPro-Nano NPU is available in two models: the Ceva-NPN32 with 32 int8 MACs, and the Ceva-NPN64 with 64 int8 MACs, both enhanced by Ceva-NetSqueeze for direct processing of compressed model weights. The Ceva-NPN32 is optimised for most TinyML workloads, focusing on voice, audio, object detection, and anomaly detection. The Ceva-NPN64 delivers double the performance through weight-sparsity acceleration, increased memory bandwidth, more MACs, and support for 4-bit weights, and is designed for more complex AI tasks such as object classification, face detection, speech recognition, and health monitoring.
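The NPN64's 4-bit weight support implies packed storage along these lines: two signed 4-bit values per byte, halving weight memory again relative to int8. The packing layout below (low nibble first) is an assumption for illustration; the NPU's actual on-chip format is not public.

```python
# Sketch of signed 4-bit weight packing, two values per byte.
# Layout (low nibble first) is an illustrative assumption, not the
# documented Ceva-NPN64 format.

def pack_int4(weights):
    """Pack signed 4-bit values (-8..7) two per byte, low nibble first."""
    assert all(-8 <= w <= 7 for w in weights)
    if len(weights) % 2:
        weights = weights + [0]            # pad to an even count
    packed = bytearray()
    for lo, hi in zip(weights[::2], weights[1::2]):
        packed.append((lo & 0xF) | ((hi & 0xF) << 4))
    return bytes(packed)

def unpack_int4(packed, count):
    """Recover the signed 4-bit values from the packed bytes."""
    out = []
    for b in packed:
        for nib in (b & 0xF, b >> 4):
            out.append(nib - 16 if nib >= 8 else nib)  # sign-extend
    return out[:count]

w = [3, -4, 7, -8, 0]
packed = pack_int4(w)          # 3 bytes instead of 5 at int8
assert unpack_int4(packed, len(w)) == w
```

Halving weight storage matters on TinyML targets where on-chip SRAM, not compute, is frequently the binding constraint.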

The NPUs are complemented by a comprehensive AI SDK, the Ceva-NeuPro Studio, which provides a consistent toolkit across the entire Ceva-NeuPro NPU family and supports open AI frameworks such as TensorFlow Lite for Microcontrollers (TFLM) and microTVM (µTVM).
