Eugene Vinitsky

FunSearch

Link to the paper!

Why this paper matters

This paper shows a key thing: LLMs can serve as hypothesis generators / a mechanism for local search, providing a direction of improvement with respect to a few samples. As discussed in weaknesses, I don’t think it establishes that LLMs are necessarily good at this, but they do seem capable of it. That’s exciting!

Summary

We would like to create a program that maximizes some desired property. The key here is that we have the ability to score each of these programs. To take an example, we might have a bin packing problem and would like to learn a program that suggests which bin we should place the next item into. An example score could then be the negative of the total number of bins used, i.e. it rewards efficient packing (note, this is not actually the score they used). A simplified version of the procedure, neglecting some details for clarity, then runs as follows:
1. Start with M islands, each initialized with a few random programs.
2. Sample an island, with the sampling weighted by how high-scoring each island is.
3. Within that island, sample 2 programs, with the sampling again weighted by how good each program is.
4. Pass the programs to an LLM and say "here are two programs, a first one and a second one that is better. Can you make a 3rd, better one?"
5. Get the score for this new program and then return it back to its island.
6. Repeat until you've created some new math.

Note, the islands are being used to create some diversity in the procedure. Every 4 hours, the worse-scoring half of the islands are killed off and reseeded, so no single lineage of programs takes over the whole population. A rough sketch of the per-step loop is below.
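To make steps 2-5 concrete, here's a minimal Python sketch of one iteration, assuming hypothetical helpers `llm_sample` (returns program text from a prompt) and `run_and_score` (executes a program on the task and returns its score) in place of the paper's actual infrastructure. The softmax weighting and the prompt format are simplifying assumptions, not the paper's exact scheme.

```python
import numpy as np

def softmax(scores, temperature=1.0):
    """Turn raw scores into sampling probabilities."""
    z = np.asarray(scores, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def funsearch_step(islands, llm_sample, run_and_score):
    """One iteration of the simplified loop described above.

    `islands` is a list of islands, each a list of (program_text, score)
    pairs. `llm_sample` and `run_and_score` are hypothetical stand-ins
    for the LLM call and the sandboxed program evaluator.
    """
    # Step 2: sample an island, weighted by how well it scores.
    best_scores = [max(score for _, score in island) for island in islands]
    island = islands[np.random.choice(len(islands), p=softmax(best_scores))]

    # Step 3: sample two distinct programs from that island, again
    # weighted by score (assumes the island holds at least 2 programs).
    probs = softmax([score for _, score in island])
    i, j = np.random.choice(len(island), size=2, replace=False, p=probs)
    (prog_a, score_a), (prog_b, score_b) = island[i], island[j]
    if score_a > score_b:
        prog_a, prog_b = prog_b, prog_a  # present the better program second

    # Step 4: ask the LLM for a third, hopefully better, program.
    prompt = (
        "Here are two programs; the second scores higher than the first.\n"
        f"# Program 1\n{prog_a}\n\n# Program 2\n{prog_b}\n\n"
        "# Program 3 (should score even higher):\n"
    )
    new_program = llm_sample(prompt)

    # Step 5: score the new program and return it back to its island.
    island.append((new_program, run_and_score(new_program)))
```

Step 1 is just the initialization of each island with a few random programs, and the periodic island resets happen outside this per-step loop.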

They then show that in 4/140 runs they manage to find a size-512 cap set in dimension 8 (if you want to know what a cap set is, the Set card game example in the Wikipedia article on cap sets is pretty good), and that they pretty consistently improve over some bin-packing heuristics.
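For reference, a cap set in dimension n is a set of vectors with entries in {0, 1, 2} such that no three distinct vectors sum to the zero vector mod 3 (equivalently, no three lie on a line). That property is exactly what makes this a good FunSearch target: candidate sets are cheap to verify. A minimal brute-force checker, as a sketch:

```python
from itertools import combinations

import numpy as np

def is_cap_set(vectors: np.ndarray) -> bool:
    """Check that `vectors` (a k x n array with entries in {0, 1, 2})
    forms a cap set in dimension n.

    Three distinct points a, b, c over the field with 3 elements lie
    on a line exactly when a + b + c == 0 (mod 3), so it suffices to
    rule out such triples. O(k^3), so only sensible for small sets.
    """
    for a, b, c in combinations(vectors, 3):
        if not np.any((a + b + c) % 3):
            return False
    return True
```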

A really rough summary figure, taken from the paper, is below.

Follow-up work

This paper, the ones already out like it (Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models), and the hordes of others yet to be published clearly represent an incoming wave of LLMs hooked up to verifiers that can be used to automate processes. That's important because anytime you can automate something, well, that's productivity growth right there.

Weaknesses

a. LLMs in this paper serve two functions: (1) they generate arbitrary programs that are likely to be syntactically correct, and (2) they generate program improvements. The former is a pretty significant advantage over other automated approaches you might take to this problem, like genetic algorithms or RL, which are likely to spend a long time generating invalid programs. However, as a consequence, we're not really sure how good the LLM is at the actual program-improvement component. The paper provides evidence that the LLM is doing some kind of program improvement, but without a baseline or a more careful study it's hard to tell exactly how good it is. I think there's a really good paper to be published here trying to figure out just how creative LLMs are at generating local improvements in this way.

b. It's clearly not very robust: only 4 of the 140 cap set runs succeeded.