Google AI has entered new territory, competing with the brightest young human minds at the International Mathematical Olympiad (IMO) held in Australia. The company’s advanced language model, developed by its DeepMind division, tackled the same complex math problems as 630 human competitors from across the world.
For months, Google’s DeepMind team had trained a specialized variant of its Gemini model, called Deep Think, to tackle the Olympiad’s notoriously challenging problems. These tasks test creativity, abstract reasoning, and multi-step logic, areas where machines often struggle compared with humans. DeepMind’s goal was not merely to win but to push the boundaries of artificial reasoning and move a step closer to artificial general intelligence (AGI), the point at which AI can think and reason like a human.
DeepMind CEO Demis Hassabis called AGI “one of the most important technologies humanity will ever create.” The IMO offered Google AI a unique testbed for that ambition, demanding innovation and reasoning that pure computational power can’t easily replicate.
During the contest, both humans and AI tackled problems ranging from combinatorics to geometry. While several student competitors, such as 17-year-old Tiger Zhang from California, achieved perfect scores on early problems, Deep Think also impressed, solving five of the six problems flawlessly.
However, the hardest problem of the event — a combinatorics puzzle — proved too difficult for both the AI and most human contestants. Only five students globally managed to solve it completely. DeepMind’s AI earned a gold-level ranking, tying with 46 human participants.
Experts say the results reveal both the progress and limitations of Google AI systems. While the technology has evolved to handle multi-step reasoning and “parallel thinking,” it still struggles with abstraction and creative leaps that define human intelligence.
An experimental reasoning model from OpenAI achieved identical results, reigniting debate over whether solving Olympiad problems truly reflects intelligence or specialized training. Some researchers, such as New York University’s Ernest Davis, argue that such achievements, while impressive, are far from the “moon-landing moment” some claim.
Still, DeepMind sees this as a milestone. The company believes reinforcement learning, which rewards correct reasoning paths and penalizes errors, can drive further advances in machine reasoning. Google AI models have also performed well recently in global coding competitions, further demonstrating their growing analytical capabilities.
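To make that idea concrete, here is a minimal, hypothetical Python sketch, not DeepMind’s actual training pipeline: a simple policy chooses among a few candidate “reasoning strategies” for a toy arithmetic task, receives a positive reward for correct answers and a negative one for mistakes, and gradually concentrates its probability on the strategy that works.

```python
import random

# Toy illustration of reward-driven learning (hypothetical, not DeepMind's setup):
# a "policy" picks one of a few candidate strategies for adding two numbers.
STRATEGIES = {
    "correct_add":  lambda a, b: a + b,                    # always right
    "off_by_one":   lambda a, b: a + b + 1,                # always wrong
    "random_guess": lambda a, b: random.randint(0, 20),    # rarely right
}

# Policy state: a positive preference weight per strategy, used as sampling odds.
prefs = {name: 1.0 for name in STRATEGIES}

def sample_strategy():
    """Sample a strategy with probability proportional to its preference weight."""
    total = sum(prefs.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for name, weight in prefs.items():
        cumulative += weight
        if r <= cumulative:
            return name
    return name  # fallback for floating-point edge cases

def train(steps=2000, lr=0.1):
    """Reward correct reasoning paths, penalize errors, and update the policy."""
    for _ in range(steps):
        a, b = random.randint(0, 10), random.randint(0, 10)
        name = sample_strategy()
        answer = STRATEGIES[name](a, b)
        reward = 1.0 if answer == a + b else -1.0
        # Nudge the chosen strategy's weight up or down, keeping it positive.
        prefs[name] = max(0.01, prefs[name] + lr * reward)

if __name__ == "__main__":
    train()
    total = sum(prefs.values())
    for name, weight in sorted(prefs.items(), key=lambda kv: -kv[1]):
        print(f"{name:13s} probability ~ {weight / total:.2f}")
```

Run repeatedly, the weight on the reliable strategy grows while the error-prone ones fade, which is the basic feedback loop described above, scaled down enormously from what training a large language model actually involves.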
As Google continues its pursuit of AGI, the competition between machines and humans is becoming less about rivalry and more about discovery. The next Olympiad may see Google AI returning, aiming for what even the best human minds find nearly impossible — a perfect score.