MathWorks and Altera recently announced a collaboration to accelerate wireless development for Altera FPGAs. The partnership lets wireless systems engineers use AI-based autoencoders to compress Channel State Information (CSI) data, significantly reducing fronthaul traffic and bandwidth requirements. Engineers working on 5G and 6G wireless communications systems can now preserve user data integrity and maintain the reliability and performance of their systems while reducing costs.
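To make the compression idea concrete, here is a minimal, illustrative sketch in Python of what autoencoder-style CSI compression achieves. It is not MathWorks' or Altera's implementation: it stands in a learned neural autoencoder with a truncated SVD (a linear encoder/decoder pair), and the matrix dimensions, rank, and noise level are all assumptions chosen for the example.

```python
import numpy as np

# Illustrative only: compress a synthetic CSI matrix with a linear
# "autoencoder" (truncated SVD). A trained neural autoencoder plays the
# same role: send a small code over the fronthaul, reconstruct far-end.
rng = np.random.default_rng(0)
n_subcarriers, n_antennas, rank = 64, 32, 4  # assumed dimensions

# Synthetic near-low-rank channel matrix (CSI is often well approximated
# by a low-rank structure) plus a little noise.
H = rng.standard_normal((n_subcarriers, rank)) @ rng.standard_normal((rank, n_antennas))
H += 0.01 * rng.standard_normal(H.shape)

# "Encoder": project the CSI onto its top-`rank` singular directions.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
code = U[:, :rank] * s[:rank]   # compressed representation sent over fronthaul
basis = Vt[:rank]               # shared decoder basis

# "Decoder": reconstruct the CSI from the compact code.
H_hat = code @ basis

compression_ratio = H.size / (code.size + basis.size)
rel_err = np.linalg.norm(H - H_hat) / np.linalg.norm(H)
print(f"compression ~{compression_ratio:.1f}x, relative error {rel_err:.4f}")
```

In this toy setup the 2,048-element CSI matrix shrinks to a few hundred values with small reconstruction error, which is the trade-off a learned autoencoder exploits at scale to cut fronthaul bandwidth.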
“The collaboration between MathWorks and Altera enables organisations to harness the power of AI for a wide range of 5G and 6G wireless communications applications, from 5G RAN to advanced driver-assistance systems (ADAS),” said Mike Fitton, Vice President and GM, Vertical Markets, Altera. “By utilising our FPGA AI suite and MathWorks software, developers can streamline their workflow from algorithm design to hardware implementation, ensuring their AI-based wireless systems meet the rigorous demands of modern applications.”
MathWorks offers a tool suite that enhances AI and wireless development, especially for Altera FPGAs. Deep Learning HDL Toolbox addresses the needs of engineers looking to implement deep learning networks on FPGA hardware. Harnessing the capabilities of HDL Coder, the toolbox empowers users to customise, build, and deploy an efficient, high-performance Deep Learning Processor IP Core.
“AI-enabled compression is a powerful technology for the telecommunications industry,” said Houman Zarrinkoub, Principal Product Manager, MathWorks. “MathWorks software offers a robust foundation for AI and wireless development. By integrating our tools with Altera’s FPGA technologies, wireless engineers can efficiently create high-performance AI applications and advanced 5G and 6G wireless systems.”
FPGA AI Suite features push-button custom AI inference accelerator IP generation on Altera FPGAs using the OpenVINO toolkit, utilising pre-trained AI models from popular industry frameworks. It further helps FPGA developers integrate AI inference accelerator IP seamlessly into FPGA design using best-in-class Quartus Prime Software FPGA flows. Combining the Deep Learning Toolbox and the OpenVINO toolkit creates a streamlined path for developers to optimise AI inference on Altera FPGAs.