Intel Signs Deal to Deploy Xeon Processors Across Google's Cloud Infrastructure
The companies will also align across multiple generations of Xeon chips to improve performance, energy efficiency, and total cost of ownership.
Intel and Google have announced an expanded multiyear collaboration to advance the next generation of AI and cloud infrastructure, as demand for scalable AI systems accelerates globally.
The partnership will see continued deployment of Intel’s Xeon processors across Google Cloud’s infrastructure, supporting a wide range of workloads, including AI training coordination, inference, and general-purpose computing.
“As AI adoption accelerates, infrastructure is becoming more complex and heterogeneous, driving increased reliance on CPUs for orchestration, data processing and system-level performance,” Intel said in a statement.
Alongside this, the two firms are expanding joint development of custom infrastructure processing units (IPUs): specialised chips designed to offload networking, storage, and security functions from CPUs.
These accelerators are expected to improve system efficiency and enable more predictable performance across hyperscale AI environments.
“AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators; it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand,” Lip-Bu Tan, Intel CEO, said.
“CPUs and infrastructure acceleration remain a cornerstone of AI systems—from training orchestration to inference and deployment. Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads,” Amin Vahdat, Google SVP & Chief Technologist, AI Infrastructure, added.
Together, Intel and Google aim to build a more efficient and flexible foundation for the next wave of AI-driven cloud services.