Terence Tao, along with other researchers, recently published a paper on using AlphaEvolve to tackle math problems. The system combines LLMs with an evolutionary algorithm to automatically generate and test mathematical constructions.

You give it a problem, say, “find a set of points on a sphere that maximizes the minimum pairwise distance.” Then, instead of you tweaking coordinates by hand for weeks, AlphaEvolve writes small programs that search for better and better solutions: the LLM mutates high-scoring programs to create new candidates, while an evaluator scores each candidate’s performance.
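The loop described above can be sketched in a few lines. This is a toy stand-in, not the paper’s system: the LLM mutation step is replaced by a random perturbation, and the sphere example is reduced to points on a circle so the whole thing runs in plain Python. The names (`evolve`, `mutate`, population sizes) are illustrative assumptions.

```python
import math
import random


def evaluate(points):
    """Score a candidate: the minimum pairwise distance (higher is better)."""
    return min(math.dist(a, b) for i, a in enumerate(points) for b in points[i + 1:])


def random_point():
    # Random point on the unit circle (a 2-D stand-in for the sphere example).
    t = random.uniform(0, 2 * math.pi)
    return (math.cos(t), math.sin(t))


def mutate(points):
    """Stand-in for the LLM mutation step: perturb one point, renormalize."""
    pts = list(points)
    i = random.randrange(len(pts))
    x, y = pts[i]
    x += random.gauss(0, 0.1)
    y += random.gauss(0, 0.1)
    n = math.hypot(x, y) or 1.0
    pts[i] = (x / n, y / n)
    return pts


def evolve(n_points=5, pop_size=20, generations=200, seed=0):
    random.seed(seed)
    population = [[random_point() for _ in range(n_points)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        survivors = population[: pop_size // 2]  # keep the high scorers
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=evaluate)


best = evolve()
# For 5 points on a circle the optimum is a regular pentagon,
# whose minimum chord length is 2*sin(pi/5) ≈ 1.176.
```

In AlphaEvolve the `mutate` step is where the LLM earns its keep: instead of blind perturbation, it rewrites the high-scoring program with something plausibly better, which is what makes the search so much faster than brute force.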

In several cases, AlphaEvolve found constructions that were already known, but it did so in hours instead of years. On some problems, like finite-field Kakeya sets or certain packing problems, it even found new constructions that slightly improved existing bounds. At other times its output was suggestive enough that human mathematicians were able to generalize the pattern and turn it into a rigorous proof.

It’s not perfect: it struggles with problems that require deep conceptual leaps, and it can sometimes “cheat” by exploiting the evaluation setup. But when it works, it dramatically speeds up the experimental side of math. The authors even combined it with other AI tools like Deep Think and AlphaProof to go all the way from discovery to formal verification.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 11 days ago
      Oh that’s pretty awesome, I’d be interested to see this approach applied to coding agents as well. You could design a language focused on specifying a formal contract the agent has to fulfill, and then have the LLM and evaluator converge on a solution.
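      One way to read the commenter’s idea: the “formal contract” becomes a set of executable properties, and the evaluator scores candidate implementations by how many properties they satisfy. A minimal sketch, with all names hypothetical, where two hand-written functions stand in for LLM-proposed candidates:

      ```python
      def contract_sort(impl):
          """Score a sort implementation against a property-based contract."""
          cases = [[3, 1, 2], [], [5, 5, 1], list(range(10, 0, -1))]
          passed = 0
          for case in cases:
              out = impl(list(case))
              if out == sorted(case):  # property: agrees with a trusted oracle
                  passed += 1
          return passed / len(cases)


      def candidate_a(xs):
          # A broken candidate an LLM might propose: returns input unchanged.
          return xs


      def candidate_b(xs):
          # A correct candidate.
          return sorted(xs)


      scores = {f.__name__: contract_sort(f) for f in (candidate_a, candidate_b)}
      # The evolutionary loop would keep the higher-scoring candidate
      # and ask the LLM to mutate it further.
      ```

      The appeal of this setup is that the contract, not the prompt, defines “done”, so the LLM/evaluator loop can iterate without a human judging each attempt.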