Nvidia and partners gear up for 800Vdc datacentres and Vera Rubin

Partners include:

Silicon providers: Analog Devices, Inc (ADI), AOS, EPC, Infineon, Innoscience, MPS, Navitas, onsemi, Power Integrations, Renesas, Richtek, Rohm, STMicroelectronics and Texas Instruments

Power system component providers: BizLink, Delta, Flex, GE Vernova, Lead Wealth, LITEON and Megmeet

Datacentre power system providers: ABB, Eaton, GE Vernova, Heron Power, Hitachi Energy, Mitsubishi Electric, Schneider Electric, Siemens and Vertiv

Nvidia and its partners are preparing 800V direct current (Vdc) datacentres for the gigawatt era that will support the Nvidia Kyber rack architecture.

Foxconn provided details on its 40-MW Taiwan datacentre, Kaohsiung-1, being built for 800Vdc. CoreWeave, Lambda, Nebius, Oracle Cloud Infrastructure and Together AI are among other industry pioneers designing for 800V datacentres.

In addition, Vertiv unveiled its space-, cost- and energy-efficient 800Vdc MGX reference architecture, a complete power and cooling infrastructure architecture. HPE is announcing product support for Nvidia Kyber as well as Nvidia Spectrum-XGS Ethernet scale-across technology, part of the Spectrum-X Ethernet platform.

Moving from traditional 415 or 480 VAC three-phase systems to 800Vdc infrastructure offers increased scalability, improved energy efficiency, reduced materials usage and greater performance capacity in datacentres. The electric vehicle and solar industries have already adopted 800Vdc infrastructure for similar benefits.

The Open Compute Project (OCP), founded by Meta, is an industry consortium of hundreds of computing and networking providers focused on redesigning hardware technology to efficiently support the growing demands on compute infrastructure.

The Vera Rubin NVL144 MGX compute tray offers an energy-efficient, 100% liquid-cooled, modular design. Its central printed circuit board midplane replaces traditional cable-based connections for faster assembly and serviceability, with modular expansion bays for Nvidia ConnectX-9 800Gb/s networking and Nvidia Rubin CPX for massive-context inference.

The Nvidia Vera Rubin NVL144 offers a major leap in accelerated computing architecture and AI performance. It’s built for advanced reasoning engines and the demands of AI agents.

Its fundamental design is based on the MGX rack architecture and will be supported by more than 50 MGX system and component partners. Nvidia plans to contribute the upgraded rack and the compute tray innovations to the OCP consortium as an open standard.

The MGX standards for compute trays and racks enable partners to mix and match components in modular fashion and scale faster with the architecture. The Vera Rubin NVL144 rack design features energy-efficient 45°C liquid cooling, a new liquid-cooled busbar for higher performance and 20 times more energy storage to keep power steady.

The MGX upgrades to compute tray and rack architecture boost AI factory performance while simplifying assembly, enabling a rapid ramp-up to gigawatt-scale AI infrastructure.

Nvidia is a leading contributor to OCP standards across multiple hardware generations, including key portions of the Nvidia GB200 NVL72 system electro-mechanical design. The same MGX rack footprint supports GB300 NVL72 and will support Vera Rubin NVL144, Vera Rubin NVL144 CPX and Vera Rubin CPX for higher performance and fast deployments.

The OCP ecosystem is also preparing for Nvidia Kyber, featuring innovations in 800Vdc power delivery, liquid cooling and mechanical design.

These innovations will support the move to the next rack server generation, Nvidia Kyber — the successor to Nvidia Oberon — which will house a high-density platform of 576 Nvidia Rubin Ultra GPUs by 2027.

The most effective way to counter the challenges of high-power distribution is to increase the voltage. Transitioning from a traditional 415 or 480 VAC three-phase system to an 800Vdc architecture offers various benefits.

This transition enables rack server partners to move from 54Vdc in-rack components to 800Vdc for better results. An ecosystem of direct current infrastructure providers, power system and cooling partners, and silicon makers — all aligned on open standards for the MGX rack server reference architecture — attended the event.
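The reasoning behind the 54Vdc-to-800Vdc step comes down to basic circuit arithmetic: at a fixed power draw, conductor current falls in proportion to voltage, and resistive (I²R) loss in the same copper falls with the square of the current. A minimal sketch, using an illustrative 1MW load that is an assumption for this example rather than an Nvidia specification:

```python
# Illustrative only: conductor current and relative I^2*R loss at a fixed
# power draw, comparing 54 Vdc in-rack distribution with 800 Vdc.

def dc_current(power_w: float, voltage_v: float) -> float:
    """Current (A) a DC feed must carry to deliver power_w at voltage_v."""
    return power_w / voltage_v

POWER_W = 1_000_000  # hypothetical 1 MW load, not an Nvidia figure

i_54v = dc_current(POWER_W, 54)    # roughly 18,500 A
i_800v = dc_current(POWER_W, 800)  # 1,250 A

# With the same copper (same resistance R), loss = I^2 * R, so the
# relative loss is simply the squared ratio of the two currents.
loss_ratio = (i_800v / i_54v) ** 2  # (54/800)^2, well under 1%

print(f"54 Vdc current:  {i_54v:,.0f} A")
print(f"800 Vdc current: {i_800v:,.0f} A")
print(f"Relative I^2R loss at 800 Vdc: {loss_ratio:.4f}")
```

The squared dependence is why distributing at higher voltage and converting down close to the load saves both energy and conductor cross-section.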

Nvidia Kyber is engineered to boost rack GPU density, scale up network size and maximise performance for large-scale AI infrastructure. By rotating compute blades vertically, like books on a shelf, Kyber enables up to 18 compute blades per chassis, while purpose-built Nvidia NVLink switch blades are integrated at the back via a cable-free midplane for seamless scale-up networking.

Over 150% more power can be transmitted through the same copper at 800Vdc, eliminating the need for 200kg copper busbars to feed a single rack.
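The exact gain depends on power factor, conductor derating and the full distribution design, so the figure above should not be reduced to a single formula. The direction of the claim, though, can be sketched with a deliberately simplified comparison: at the same conductor ampacity, a two-wire 800Vdc feed carries more power per conductor than a three-wire 415 VAC three-phase feed. Unity power factor and no derating are assumed here; this toy model captures only the voltage effect, not the additional AC derating factors behind the larger cited figure.

```python
import math

# Simplified, illustrative comparison of power per conductor at the same
# conductor current. The 400 A ampacity is a hypothetical round number.

I = 400.0  # hypothetical conductor ampacity, amps

# Three-phase 415 VAC with three current-carrying conductors:
p_ac_total = math.sqrt(3) * 415 * I   # total power, W (unity power factor)
p_ac_per_conductor = p_ac_total / 3   # roughly 96 kW per conductor

# 800 Vdc with two conductors (supply and return):
p_dc_total = 800 * I
p_dc_per_conductor = p_dc_total / 2   # 160 kW per conductor

gain = p_dc_per_conductor / p_ac_per_conductor - 1  # roughly two-thirds more
print(f"Power-per-conductor gain at 800 Vdc: {gain:.0%}")
```

Real installations add cable derating, safety margins and power-factor effects on the AC side, which is where the rest of the cited headroom comes from.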

Kyber will become a foundational element of hyperscale AI datacentres, enabling superior performance, efficiency and reliability for state-of-the-art generative AI workloads in the coming years. Nvidia Kyber racks offer customers a way to cut copper use by tons, leading to millions of dollars in cost savings.

In addition to hardware, Nvidia NVLink Fusion is gaining momentum, enabling companies to seamlessly integrate their semi-custom silicon into highly optimised and widely deployed datacentre architecture, reducing complexity and accelerating time to market.

Intel and Samsung Foundry are joining the NVLink Fusion ecosystem that includes custom silicon designers, CPU and IP partners, so that AI factories can scale up quickly to handle demanding workloads for model training and agentic AI inference.

  • As part of the recently announced Nvidia and Intel collaboration, Intel will build x86 CPUs that integrate into Nvidia infrastructure platforms using NVLink Fusion.
  • Samsung Foundry has partnered with Nvidia to meet growing demand for custom CPUs and custom XPUs, offering design-to-manufacturing experience for custom silicon.

 

Source

Guidantech | Smart Gadgets, Tech Reviews & How-To Guides