Did xAI lie about Grok 3’s benchmarks?

Debates over AI benchmarks, and how AI labs report them, are spilling out into public view.

This week, an OpenAI employee accused Elon Musk's AI company, xAI, of publishing misleading benchmark results for its latest AI model, Grok 3. Igor Babushkin, one of xAI's co-founders, insisted that the company was in the right.

The truth lies somewhere in between.

In a post on xAI's blog, the company published a graph showing Grok 3's performance on AIME 2025, a collection of challenging math questions from a recent invitational mathematics exam. Some experts have questioned AIME's validity as an AI benchmark. Nevertheless, AIME 2025 and older versions of the test are commonly used to probe a model's mathematical capabilities.

xAI's graph showed two variants of Grok 3, Grok 3 Reasoning Beta and Grok 3 mini Reasoning, beating OpenAI's best-performing available model, o3-mini-high, on AIME 2025. But the graph did not include o3-mini-high's AIME 2025 score at "cons@64."

What is cons@64? It is short for "consensus@64," and it essentially gives a model 64 tries at each problem in a benchmark, taking the most frequently generated answer as the final answer. As you can imagine, cons@64 tends to boost models' benchmark scores quite a bit, so omitting it from a graph can make it appear as though one model surpasses another when in reality it does not.
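To make the distinction concrete, here is a minimal Python sketch of how a consensus-style score could be computed. The `sample_answer` callable is a hypothetical stand-in for querying a real model (the article does not specify any API):

```python
from collections import Counter

def consensus_answer(sample_answer, problem, n_samples=64):
    """Majority-vote sketch of "cons@64": sample 64 answers to the
    same problem and return whichever answer appears most often."""
    answers = [sample_answer(problem) for _ in range(n_samples)]
    final_answer, _count = Counter(answers).most_common(1)[0]
    return final_answer

def first_try_answer(sample_answer, problem):
    """By contrast, an "@1" score grades the model's single first attempt."""
    return sample_answer(problem)
```

Because majority voting averages out individual sampling errors, a model's cons@64 score typically sits well above its @1 score, which is why comparing one model's cons@64 figure against another model's @1 figure is misleading.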

Grok 3 Reasoning Beta's and Grok 3 mini Reasoning's AIME 2025 scores at "@1" (meaning the first score the models got on the benchmark) fall below o3-mini-high's score. Grok 3 Reasoning Beta also trails slightly behind OpenAI's o1 model set to "medium" computing. Yet xAI is advertising Grok 3 as the "world's smartest AI."

Babushkin argued on X that OpenAI has published similarly misleading benchmark charts in the past, albeit charts comparing the performance of its own models. A more neutral party in the debate put together a more "accurate" graph showing nearly every model's performance at cons@64.

But as AI researcher Nathan Lambert pointed out in a post, perhaps the most important metric remains a mystery: the computational (and monetary) cost it took for each model to achieve its best score. That just goes to show how little most AI benchmarks communicate about models' limitations and strengths.
