Meta & AMD Strike Major AI Infrastructure Deal for 6 GW of Compute
Shipments for the first GPU deployments are slated to begin in the second half of 2026.
Meta has unveiled a major multi-year agreement with AMD to power its expanding artificial intelligence infrastructure with up to 6 gigawatts (GW) of AMD Instinct GPUs, marking a significant escalation in its AI compute strategy. The first deployments, built on custom-tuned silicon and next-generation AMD hardware, are slated to begin shipping in the second half of 2026.
The agreement broadens an existing relationship between the tech giants and aligns their product roadmaps across silicon, systems, and software to deliver high-performance, energy-efficient compute tailored to Meta’s AI workloads. The infrastructure will be built on the AMD Helios rack-scale architecture co-developed with Meta through the Open Compute Project and will include both AMD Instinct GPUs and sixth-generation EPYC CPUs.
“We are proud to expand our strategic partnership with Meta as they push the boundaries of AI at unprecedented scale,” said Dr. Lisa Su, AMD’s Chair and CEO. “This multi-year, multi-generation collaboration … accelerates one of the industry’s largest AI deployments and places AMD at the center of the global AI buildout.”
Meta founder and CEO Mark Zuckerberg described the pact as a milestone in diversifying the company’s compute resources. “We’re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence,” he said, adding that AMD “will be an important partner for many years to come.”
The push for diversified AI hardware comes as Meta scales AI services and infrastructure worldwide, part of a broader industry trend in which major tech firms lock in extensive chip supply agreements beyond their dominant suppliers.