Nvidia is set to showcase its ‘grid-to-chip’ philosophy at the Open Compute Project Global Summit, outlining a new direction for AI infrastructure. The showcase includes the unveiling of the next-generation Vera Rubin MGX architecture and Spectrum-XGS Ethernet, designed for giga-scale AI factories. Nvidia aims to integrate its technologies across chips, networking, data center infrastructure, and software orchestration, positioning itself as the connective layer of the AI technology stack. Senior product marketing manager Joe Dellaera emphasizes that networking, computing, mechanical design, and power systems must be designed as a unified whole to maximize the efficiency and profitability of AI factories.
The Vera Rubin MGX architecture combines Nvidia’s Vera CPU with the Rubin CPX GPU in a flexible server design that supports various configurations. Notably, the architecture is expected to deliver roughly eight times the performance of the existing GB300, while its cable-free design simplifies assembly and maintenance. Analysts from Moor Insights & Strategy praise the simplicity and modularity of the MGX rack, which significantly boosts operational efficiency for companies managing large fleets of racks. Meanwhile, Nvidia is preparing for the transition to 800 VDC power infrastructure, promising improved scalability and energy efficiency in the evolving data center environment, a shift it deems critical as power availability becomes a new competitive factor.
👉 Read the original: CIO Magazine