After OpenAI, Google DeepMind’s Gemini Wins Gold at Math Olympiad

Last week, OpenAI stated that an experimental reasoning model had achieved gold-medal-level performance.


Google DeepMind’s latest advancement in artificial intelligence has reached a new milestone: solving International Mathematical Olympiad (IMO)-level problems with gold-medal performance.

In a blog post, the company announced that its advanced Gemini model, equipped with the “Deep Think” reasoning system, has achieved a gold medal score on the official IMO, the world’s most prestigious high school math competition. This breakthrough marks a leap in mathematical reasoning and problem-solving capabilities for large language models (LLMs).

Notably, just last week OpenAI stated that one of its experimental reasoning models had achieved gold-medal-level performance at the 2025 IMO.

DeepMind stated, “To make the most of the reasoning capabilities of Deep Think, we additionally trained this version of Gemini on novel reinforcement learning techniques that can leverage more multi-step reasoning, problem-solving, and theorem-proving data.”

DeepMind also provided the model with access to a curated set of high-quality mathematical problem solutions and incorporated general hints and strategies for approaching IMO problems into its training instructions.

"We'll be making a version of this Deep Think model available to a set of trusted testers, including mathematicians, before rolling it out to Google AI Ultra subscribers," Google DeepMind CEO Demis Hassabis said.

Last year, DeepMind’s AlphaProof and AlphaGeometry 2 systems collectively achieved silver-medal performance at the International Mathematical Olympiad, solving four of the six problems—two in algebra, one in number theory, and one in geometry—for a score of 28 out of 42 points under expert evaluation.