OpenAI Says New Experimental Model Wins Gold at International Math Olympiad
Despite the breakthrough, OpenAI has stated that it does not plan to release the model anytime soon.

OpenAI’s latest experimental reasoning model has achieved a gold-medal level performance at the 2025 International Math Olympiad (IMO)—a first in AI history.
The model solved five of the six problems under authentic exam conditions—two 4.5‑hour sessions, no internet access, and official problem statements—earning 35 of 42 points, a score verified by three former IMO medalists.
“I’m excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world’s most prestigious math competition,” OpenAI researcher Alexander Wei said in an X post.
Unlike specialised systems such as DeepMind’s AlphaGeometry, this was a general-purpose LLM relying on reinforcement learning and scaled-up test-time compute.
"In our evaluation, the model solved 5 of the 6 problems on the 2025 IMO. For each problem, three former IMO medalists independently graded the model’s submitted proof, with scores finalized after unanimous consensus. The model earned 35/42 points in total, enough for gold," Wei added.
OpenAI CEO Sam Altman hailed it as a milestone toward general intelligence:
“This is an LLM doing math and not a specific formal math system; it is part of our main push towards general intelligence.”
Despite the breakthrough, OpenAI has stated that it does not plan to release the model anytime soon. The company considers it an experimental research system and intends to hold back these advanced mathematical capabilities for several more months.
While some experts caution against overinterpreting AI’s new reasoning prowess, the achievement marks a significant leap from grade-school math benchmarks to one of the world’s toughest high-school-level competitions.