
DeepMind AI crushes tough maths problems and competes with top human solvers

Started by Admin, Feb 08, 2025, 01:31 PM

Admin

A year ago, Google DeepMind's AI, AlphaGeometry, shocked everyone by solving math problems at the level of silver medalists in the International Mathematical Olympiad (IMO). That's a competition for top high school math students.

Now, DeepMind says its upgraded version, AlphaGeometry2, has gotten even better—strong enough to outperform the average gold medalist. A preprint on arXiv details the results.

"I don't think it'll be long before computers get perfect scores at the IMO," says Kevin Buzzard, a mathematician at Imperial College London.

The IMO covers four topics: Euclidean geometry, number theory, algebra, and combinatorics. Geometry is especially tough for AI because competitors must prove their solutions step by step. In July 2024, DeepMind introduced AlphaGeometry2, along with a new system called AlphaProof, designed to tackle the non-geometry problems.

How It Works

AlphaGeometry combines a specialized language model with a symbolic reasoning system. Unlike a typical AI model, it doesn't just learn from data; it also follows built-in logical rules designed by humans. DeepMind trained the model to "speak" formal math, which makes its output easier to check for mistakes and helps avoid AI's usual problem of making things up.
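To make that loop concrete, here is a minimal, self-contained Python sketch of the general idea: a symbolic engine applies human-written rules until it gets stuck, then a "language model" (stubbed out here) proposes an auxiliary construction and deduction resumes. The rule set, the stub, and the toy parallel-lines problem are all invented for illustration; this is not DeepMind's code or API.

```python
# Toy neuro-symbolic proving loop. Facts are tuples like
# ("parallel", "l", "m"). Everything here is illustrative only.

def deduce(facts):
    """Forward-chain one human-written rule: parallelism is transitive."""
    new = set()
    for (r1, a, b) in facts:
        for (r2, c, d) in facts:
            if r1 == r2 == "parallel" and b == c and a != d:
                new.add(("parallel", a, d))
    return new - facts

def stub_lm_suggestion(facts, goal):
    """Stand-in for the neural model: propose an auxiliary premise.
    A real system would generate this in a formal geometry language,
    so the symbolic engine can verify every step it suggests."""
    return {("parallel", "m", "n")}  # hypothetical construction

def prove(premises, goal, max_rounds=5):
    facts = set(premises)
    for _ in range(max_rounds):
        while True:                    # symbolic step: saturate the rules
            derived = deduce(facts)
            if not derived:
                break
            facts |= derived
        if goal in facts:
            return True                # every step was rule-checked
        facts |= stub_lm_suggestion(facts, goal)  # neural step
    return False

# Toy run: given lines l // m and n // k, the goal l // k needs the
# auxiliary fact m // n, which the stubbed "model" supplies.
print(prove({("parallel", "l", "m"), ("parallel", "n", "k")},
            ("parallel", "l", "k")))   # -> True
```

The design point is the division of labor: the symbolic engine only ever applies checkable rules, so nothing the language model proposes can sneak an unverified claim into the proof.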

With AlphaGeometry2, DeepMind made big improvements. It now builds on Google's powerful Gemini language model and can reason in new ways, such as moving points on a diagram to change a shape's size, or solving linear equations.
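As a hedged illustration of the "solving linear equations" capability: angle chasing in geometry often reduces to a small linear system. The triangle constraints below are invented for demonstration and are not drawn from the paper; the solver is just NumPy's standard routine.

```python
# Angle chasing as a linear system: triangle with angles a, b, c where
#   a + b + c = 180    (angle sum of a triangle)
#   b = 2a             (b is twice a)
#   c = a + 20         (c exceeds a by 20 degrees)
import numpy as np

A = np.array([[ 1.0, 1.0, 1.0],    # a + b + c = 180
              [-2.0, 1.0, 0.0],    # -2a + b   = 0
              [-1.0, 0.0, 1.0]])   # -a  + c   = 20
rhs = np.array([180.0, 0.0, 20.0])

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)  # 40.0 80.0 60.0
```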

Computers are getting better at math. The question is, how far will they go?