Xilinx and Motovis have announced that they are collaborating on a solution that pairs the Xilinx Automotive (XA) Zynq System-on-Chip (SoC) platform with Motovis’ Convolutional Neural Network (CNN) IP for the automotive market, specifically for vehicle perception and control in forward camera systems.
The solution builds upon Xilinx’s corporate initiative to provide customers with robust platforms to enhance and speed development.
Forward camera systems are a critical element of advanced driver-assistance systems because they provide the advanced sensing capabilities required for safety-critical functions, including Lane-Keeping Assistance (LKA), Automatic Emergency Braking (AEB), and Adaptive Cruise Control (ACC).
The solution, which is available now, supports a range of parameters necessary for the European New Car Assessment Program (NCAP) 2022 requirements by utilising convolutional neural networks to achieve a cost-effective combination of low-latency image processing, flexibility and scalability.
“This collaboration is a significant milestone for the forward camera market as it will allow automotive OEMs to innovate faster,” said Ian Riches, Vice President for the Global Automotive Practice at Strategy Analytics.
“The forward camera market has tremendous growth opportunity, where we anticipate almost 20 percent year-on-year volume growth over 2020 to 2025. Together, Xilinx and Motovis are delivering a highly optimised hardware and software solution that will greatly serve the needs of automotive OEMs, especially as new standards emerge and requirements continue to grow.”
The forward camera solution scales across the 28nm and 16nm XA Zynq SoC families using Motovis’ CNN IP, a unique combination of optimised hardware and software partitioning capabilities with customisable CNN-specific engines that host Motovis’ deep learning networks – resulting in a cost-effective offering at different performance levels and price points.
The solution supports image resolutions up to eight megapixels. For the first time, OEMs and Tier-1 suppliers can now layer their own feature algorithms on top of Motovis’ perception stack to differentiate and future-proof their designs.
“We are extremely pleased to unveil this new initiative with Xilinx and to bring to market our CNN forward camera solution. Customers designing systems enabled with AEB and LKA functionality need efficient neural network processing within an SoC that gives them flexibility to implement future features easily,” said Dr. Zhenghua Yu, CEO, Motovis.
“With Motovis’ customisable deep learning networks and the Xilinx Zynq platform’s ability to host CNN-specific engines that provide unmatched efficiency and optimisation, we’re helping to future-proof the design to meet customer needs.”
Market forces continue to drive adoption of forward camera systems as automakers adhere to government mandates and consumer safety programmes, including the European Commission’s General Safety Regulation, the National Highway Traffic Safety Administration and NCAP. All three have issued formal mandates or strong guidance regarding automakers’ implementation of LKA and AEB in new vehicles produced between 2020 and 2025 and onward.
“Expanding our XA offering with a comprehensive solution for the forward camera market puts a cost-optimised, high-performance solution in the hands of our customers. We’re thrilled to bring this to life and drive the industry forward,” said Willard Tu, Senior Director of Automotive, Xilinx.
“Motovis’ expertise in embedded deep learning and how they’ve optimised neural networks to handle the immense challenges of forward camera perception puts us both in a unique position to gain market share, all while accelerating our OEM customers’ time to market.”
Xilinx and Motovis will be speaking at the Xilinx Adapt 2021 virtual event on 15th September, 2021. Adapt 2021 will feature executive keynotes with appearances from partners and customers, along with a series of more than 100 presentations, forums, product trainings and labs designed to help users unlock the value of adaptive computing.