DeepSeek R2 Rumored to Be Trained Using Huawei's AI Chips

R2 will be 97.3% more affordable than ChatGPT-4 for enterprise use


DeepSeek, the company that sent Silicon Valley into a meltdown with its open-source Large Language Model (LLM), is rumored to be training its next model—R2—using Huawei Ascend 910B chips.

During training, the model reportedly achieved 82% utilisation of the Ascend processor cluster and delivered a peak performance of 512 PetaFLOPS at FP16 precision.
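The relationship between delivered throughput, per-chip peak, and utilisation can be sketched as below. Note that the per-chip FP16 rating used here is a placeholder assumption for illustration, not a confirmed Huawei specification; only the 512 PetaFLOPS and 82% figures come from the rumour.

```python
# Back-of-envelope sketch: delivered throughput = chips * per-chip peak * utilisation.
# PER_CHIP_FP16_TFLOPS is a hypothetical placeholder, NOT an official Ascend 910B spec.
PER_CHIP_FP16_TFLOPS = 350.0   # assumed per-chip FP16 peak, TFLOPS
UTILISATION = 0.82             # rumored cluster utilisation
DELIVERED_PFLOPS = 512.0       # rumored delivered FP16 performance

# Solve for the implied number of chips (1 PFLOPS = 1000 TFLOPS).
n_chips = DELIVERED_PFLOPS * 1000 / (PER_CHIP_FP16_TFLOPS * UTILISATION)
print(f"Implied cluster size: ~{n_chips:.0f} chips")
```

Under that assumed per-chip rating, the rumoured figures would imply a cluster on the order of a couple of thousand chips; a different rating would scale the estimate proportionally.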

If China succeeds in producing a world-class AI model with domestically developed chips, it would be a clear indication that U.S. policies aimed at limiting China’s access to cutting-edge technology have failed.

By relying on homegrown solutions like Huawei’s Ascend 910B, DeepSeek would demonstrate that China can overcome geopolitical barriers and emerge as a major player in the global AI race, challenging the very policies that were designed to hold it back.

The rumours come at a time when U.S. President Donald Trump is imposing tariffs on China in a bid to limit its technological progress.

Reports also suggest Huawei Technologies is preparing to start mass shipments of its new 910C chip to Chinese clients as early as next month. Some initial deliveries have already taken place.

The DeepSeek R2 model is reportedly set to double the parameter count of its predecessor, R1, reaching an impressive 1.2 trillion parameters. Remarkably, R2 is claimed to be 97.3% more affordable than ChatGPT-4 for enterprise use, priced at $0.07 per million input tokens and $0.27 per million output tokens.
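The arithmetic behind a "97.3% more affordable" figure can be sketched as a simple price-reduction calculation. The baseline price below is an assumption for illustration ($10 per million tokens, a commonly cited GPT-4-class rate); only the R2 prices come from the rumour.

```python
# Sketch of the percentage-reduction arithmetic behind the affordability claim.
# The $10/million baseline is an assumed GPT-4-class price, not a confirmed figure.
def price_reduction(new_price: float, baseline_price: float) -> float:
    """Percentage reduction of new_price relative to baseline_price."""
    return (1 - new_price / baseline_price) * 100

r2_output_price = 0.27   # rumored R2 output price, $ per million tokens
baseline_price = 10.00   # assumed baseline price, $ per million tokens

print(round(price_reduction(r2_output_price, baseline_price), 1))  # 97.3
```

Under that assumed baseline, the rumoured $0.27 output price works out to exactly a 97.3% reduction, which may be where the headline figure originates.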

If confirmed, DeepSeek R2 would position itself as the most cost-effective LLM on the market, challenging rivals such as GPT-4 Turbo and Gemini 2.0.