CoreWeave Becomes First to Deploy NVIDIA Blackwell AI Chips

CoreWeave plans to make GB200 NVL72 instances available to customers later this year.

Cloud infrastructure leader CoreWeave has announced the world’s first deployment of NVIDIA’s GB200 NVL72, a groundbreaking AI supercomputing platform designed to handle the next generation of large-scale AI workloads.

The announcement places CoreWeave at the forefront of accelerated computing and positions it as a critical player in powering frontier AI.

The GB200 NVL72 is a rack-scale system that combines 72 Blackwell GPUs with 36 Grace CPUs, interconnected by NVIDIA’s latest NVLink and NVSwitch technologies.

This tight integration delivers up to 30x faster real-time inference on trillion-parameter large language models compared with the prior Hopper generation, and lets developers train trillion-parameter models more efficiently and at scale.
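A rough back-of-envelope calculation shows why trillion-parameter models push beyond a single GPU and toward a rack-scale memory domain like the NVL72. The sketch below uses illustrative figures (16-bit weights, a placeholder per-GPU HBM capacity of roughly 192 GB for a Blackwell-class GPU); it is not an NVIDIA specification.

```python
# Back-of-envelope sketch: why trillion-parameter models need a
# multi-GPU, rack-scale memory domain. Figures are illustrative.

def weights_memory_tb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold model weights, in terabytes."""
    return n_params * bytes_per_param / 1e12

# A 1-trillion-parameter model stored in 16-bit precision:
needed_tb = weights_memory_tb(1e12)       # 2.0 TB for weights alone

# Assumed per-GPU HBM capacity (placeholder, ~192 GB):
hbm_per_gpu_tb = 0.192

# Ceiling division: minimum GPUs just to hold the weights.
min_gpus = -(-needed_tb // hbm_per_gpu_tb)

print(f"Weights alone: {needed_tb:.1f} TB")
print(f"Minimum GPUs at {hbm_per_gpu_tb * 1000:.0f} GB each: {int(min_gpus)}")
```

Training multiplies this footprint several times over (gradients, optimizer states, activations), which is why a 72-GPU NVLink domain with unified, high-bandwidth access across the rack matters for models at this scale.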

“We’re proud to be the first cloud provider globally to deploy NVIDIA’s GB200 NVL72,” said CoreWeave CEO Michael Intrator. “It’s a massive leap forward in high-performance computing, tailor-made for today’s most demanding AI workloads.”

CoreWeave plans to make GB200 NVL72 instances available to customers later this year, unlocking new capabilities for enterprise AI, generative models, and scientific research.

At Computex 2025, NVIDIA CEO Jensen Huang introduced DGX Cloud Lepton, a new AI platform designed to link developers with a global marketplace of GPU compute resources.

Lepton connects tens of thousands of GPUs from NVIDIA’s cloud partners—including CoreWeave, Foxconn, GMI Cloud, Lambda, SoftBank, and others—enabling developers to access regional compute capacity on demand or for long-term needs.

At the event, Huang also announced that the company’s next-generation GB300 AI systems will begin rolling out in Q3 2025.