OpenAI Says No Plans to Use Google’s AI Chips at Scale Amid Testing Phase

Earlier this month, it was reported that OpenAI had struck a deal with Google Cloud to meet its rising computing demands.

OpenAI has clarified it has no immediate plans to adopt Google’s in-house AI chips, just days after reports surfaced suggesting the ChatGPT-maker was considering using Google’s tensor processing units (TPUs) to meet surging compute demands.

A spokesperson for OpenAI confirmed on Sunday that while the company is conducting early tests with Google’s TPUs, there are currently “no plans to deploy them at scale.”

Although testing various chips is standard practice among AI companies, scaling up new hardware typically requires substantial changes to software and system architecture.

OpenAI remains heavily reliant on Nvidia GPUs and AMD’s AI chips. It is also working on developing its own custom AI chip, aiming to reach the "tape-out" stage this year — the point at which the chip design is finalized for manufacturing.

While it is only testing Google's chips, OpenAI is already a customer of Google Cloud, as previously reported by Reuters.

However, the bulk of its compute needs are being met by GPU servers operated by CoreWeave, a leading AI-focused cloud provider.

Recently, Oracle Corporation announced a major cloud services contract expected to generate over $30 billion in annual revenue starting fiscal year 2028.

While Oracle did not name the customer, industry observers point to OpenAI as the likely buyer.