Jordan Ellenberg recently co-authored an article in Nature. He collaborated with a group of researchers at Google DeepMind to train artificial intelligence (AI) to better evaluate its own responses. The aim is to train AI to consider how to answer a question and offer fact-based responses instead of "hallucinating" data. The work introduces FunSearch (short for searching in the function space), an evolutionary procedure that pairs a pre-trained LLM with a systematic evaluator.
In contrast to most computer search approaches, FunSearch searches for programs that describe how to solve a problem, rather than for the solution itself. Beyond being effective and scalable, this approach yields programs that tend to be more interpretable than raw solutions, enabling feedback loops between domain experts and FunSearch, as well as the deployment of such programs in real-world applications.
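The loop described above can be sketched in miniature. This is a hedged illustration, not DeepMind's implementation: the `mutate` function below is a hypothetical stand-in for the pre-trained LLM (which in the real system is prompted with high-scoring programs and asked for improved variants), and the toy task (recovering `x * x`) stands in for the combinatorial problems studied in the paper. What it does show is the key structure: candidate *programs* are proposed, a systematic evaluator scores them, and only candidates that score well survive, which is how hallucinated or broken code gets filtered out.

```python
import random


def evaluate(program_source):
    """Systematic evaluator: run the candidate program on a toy task
    and return a numeric score (higher is better). The toy task here
    is to match f(x) = x*x on a handful of test inputs."""
    namespace = {}
    try:
        exec(program_source, namespace)
        candidate = namespace["candidate"]
        tests = [(x, x * x) for x in range(-3, 4)]
        error = sum(abs(candidate(x) - y) for x, y in tests)
        return -error
    except Exception:
        # Broken or "hallucinated" programs score worst, so the
        # evaluator automatically discards them.
        return float("-inf")


def mutate(program_source):
    """Hypothetical stand-in for the LLM proposal step: in FunSearch a
    real LLM rewrites promising programs; here we just tweak one
    coefficient at random to keep the sketch self-contained."""
    coeff = random.choice(["0", "1", "2", "x"])
    return f"def candidate(x):\n    return x * {coeff}\n"


def funsearch_sketch(generations=50, seed=0):
    """Minimal evolutionary loop: propose, evaluate, keep the best."""
    random.seed(seed)
    best = "def candidate(x):\n    return 0\n"
    best_score = evaluate(best)
    for _ in range(generations):
        child = mutate(best)
        score = evaluate(child)
        if score > best_score:  # the evaluator, not the LLM, decides survival
            best, best_score = child, score
    return best, best_score


program, score = funsearch_sketch()
print(program)
print("score:", score)
```

Because the winner is a small, readable program rather than an opaque answer, a domain expert can inspect it directly, which is the interpretability advantage the paragraph above refers to.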