Winbond Electronics Corporation has introduced groundbreaking technology designed to empower affordable Edge AI computing in mainstream applications. The company’s novel ultra-bandwidth architecture, known as CUBE (Customized Ultra-Bandwidth Elements), optimizes memory technology to deliver seamless performance for generative AI tasks in hybrid Edge/cloud environments.
CUBE significantly boosts the performance of front-end 3D structures, including chip-on-wafer (CoW) and wafer-on-wafer (WoW), as well as back-end solutions like 2.5D/3D chip-on-Si-interposer-on-substrate and fan-out configurations. Engineered to cater to the escalating demands of Edge AI computing devices, CUBE is compatible with memory densities ranging from 256Mb to 8Gb within a single die. Furthermore, it can be 3D stacked to enhance bandwidth while simultaneously reducing data transfer power consumption.
Winbond is taking a major step forward with CUBE, enabling seamless deployment across various platforms and interfaces. The technology is suited to advanced applications such as wearable and Edge server devices, surveillance equipment, ADAS, and co-robots.
“The CUBE architecture enables a paradigm shift in AI deployment,” says Winbond. “We believe that the integration of cloud AI and powerful Edge AI will define the next phase of AI development. With CUBE, we are unlocking new possibilities and paving the way for improved memory performance and cost optimization on powerful Edge AI devices.”
CUBE’s key features include:
- Power efficiency: CUBE delivers exceptional power efficiency, consuming less than 1pJ/bit, ensuring extended operation and optimized energy usage.
- Superior performance: With bandwidth capabilities ranging from 32GB/s to 256GB/s per die, CUBE ensures accelerated performance that exceeds industry standards.
- Small form factor: CUBE is produced on current process nodes, with 16nm planned for 2025, allowing it to fit into smaller form factors seamlessly. The introduction of through-silicon vias (TSVs) further enhances performance, improving signal and power integrity. Additionally, TSVs reduce the IO area through a smaller pad pitch and improve heat dissipation, especially when using the SoC as the top die and CUBE as the bottom die.
- Cost-effective solution with high bandwidth: Achieving outstanding cost-effectiveness, the CUBE IO boasts a data rate of up to 2Gbps across a total of 1K IO. When paired with legacy foundry processes such as 28nm/22nm SoC, CUBE delivers ultra-high bandwidth of 32GB/s to 256GB/s (comparable to HBM2 bandwidth), equivalent to the combined bandwidth of 4 to 32 LPDDR4X x16 devices running at 4266Mbps.
- Reduction in SoC die size for improved cost efficiency: By stacking the SoC (top die, without TSVs) atop CUBE (bottom die, with TSVs), the SoC die size can be minimized, eliminating any TSV penalty area. This not only enhances cost advantages but also contributes to overall efficiency, including the small form factor of Edge AI devices.
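The bandwidth and power figures quoted above can be sanity-checked with simple unit conversions. The sketch below uses only numbers stated in the article (1K IO at 2Gbps, less than 1pJ/bit, LPDDR4X x16 at 4266Mbps); the conversion formulas are standard arithmetic, not Winbond-published methodology, and the helper name is our own.

```python
# Back-of-envelope check of the CUBE figures quoted in the article.

def bus_bandwidth_gbs(io_count: int, rate_gbps: float) -> float:
    """Aggregate bandwidth in GB/s for a parallel bus (8 bits per byte)."""
    return io_count * rate_gbps / 8

# CUBE: up to 1K (1024) IO at 2 Gbps per pin.
cube_peak = bus_bandwidth_gbs(1024, 2.0)      # 256.0 GB/s

# One LPDDR4X x16 device at 4266 Mbps per pin.
lpddr4x = bus_bandwidth_gbs(16, 4.266)        # ~8.53 GB/s

# How many LPDDR4X x16 devices span CUBE's 32-256 GB/s range?
low = 32 / lpddr4x                            # ~3.8  -> roughly 4 devices
high = 256 / lpddr4x                          # ~30   -> roughly 32 devices

# Transfer power at the quoted 1 pJ/bit: energy-per-bit * bits-per-second.
watts_at_peak = 1e-12 * cube_peak * 8e9       # ~2.05 W at 256 GB/s

print(f"CUBE peak: {cube_peak:.0f} GB/s")
print(f"LPDDR4X x16: {lpddr4x:.2f} GB/s -> {low:.1f}-{high:.1f} devices")
print(f"Transfer power at 1 pJ/bit: {watts_at_peak:.2f} W")
```

The numbers line up: 1024 IO at 2Gbps yields exactly 256GB/s, and 4 to 32 LPDDR4X x16 devices cover roughly 34 to 273GB/s, consistent with the quoted 32-256GB/s range.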
“CUBE can unleash the full potential of hybrid Edge/cloud AI to elevate system capabilities, response time, and energy efficiency,” Winbond added. “Winbond’s commitment to innovation and collaboration will enable developers and enterprises to drive advancement across various industries.”
Winbond is actively engaging with partner companies to establish the 3DCaaS platform, which will leverage CUBE’s capabilities. By integrating CUBE with existing technologies, Winbond aims to offer cutting-edge solutions that empower businesses to thrive in the era of AI-driven transformation.