arXiv:2503.18809

Classical Planning with LLM-Generated Heuristics: Challenging the State of the Art with Python Code

Published on Mar 24 · Submitted by abcorrea on Apr 1

Abstract

In recent years, large language models (LLMs) have shown remarkable capabilities in various artificial intelligence problems. However, they fail to plan reliably, even when prompted with a detailed definition of the planning task. Attempts to improve their planning capabilities, such as chain-of-thought prompting, fine-tuning, and explicit "reasoning" still yield incorrect plans and usually fail to generalize to larger tasks. In this paper, we show how to use LLMs to generate correct plans, even for out-of-distribution tasks of increasing size. For a given planning domain, we ask an LLM to generate several domain-dependent heuristic functions in the form of Python code, evaluate them on a set of training tasks within a greedy best-first search, and choose the strongest one. The resulting LLM-generated heuristics solve many more unseen test tasks than state-of-the-art domain-independent heuristics for classical planning. They are even competitive with the strongest learning algorithm for domain-dependent planning. These findings are especially remarkable given that our proof-of-concept implementation is based on an unoptimized Python planner and the baselines all build upon highly optimized C++ code. In some domains, the LLM-generated heuristics expand fewer states than the baselines, revealing that they are not only efficiently computable, but sometimes even more informative than the state-of-the-art heuristics. Overall, our results show that sampling a set of planning heuristic function programs can significantly improve the planning capabilities of LLMs.
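
As a rough illustration of the search procedure described above (not the authors' planner), the sketch below plugs an arbitrary Python heuristic function, such as one generated by an LLM, into a greedy best-first search. The `is_goal` and `successors` callbacks and the state representation are hypothetical placeholders for whatever interface a planner provides.

```python
import heapq
import itertools

def greedy_best_first_search(initial_state, is_goal, successors, heuristic):
    """Greedy best-first search: always expand the state whose heuristic
    value is lowest. `heuristic` is any callable mapping a state to a number,
    e.g. an LLM-generated Python function for the planning domain.
    States are assumed to be hashable."""
    counter = itertools.count()  # tie-breaker so heapq never compares states
    frontier = [(heuristic(initial_state), next(counter), initial_state, [])]
    closed = set()
    while frontier:
        _, _, state, plan = heapq.heappop(frontier)
        if is_goal(state):
            return plan  # list of actions from the initial state to a goal
        if state in closed:
            continue
        closed.add(state)
        for action, successor in successors(state):
            if successor not in closed:
                heapq.heappush(
                    frontier,
                    (heuristic(successor), next(counter), successor, plan + [action]),
                )
    return None  # search space exhausted without reaching a goal
```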

Community


Our work proposes a new method for generating domain-specific heuristics for classical planning using LLMs. Here are the key points:

  • Traditional classical planners depend heavily on the quality of the heuristics used. Heuristics are usually either domain-independent (general but less accurate), manually crafted (requiring expertise), or learned per domain (computationally costly).
  • We propose using LLMs to generate domain-dependent heuristics from domain descriptions, example tasks, and code samples. The LLM is prompted to output Python code for a heuristic function. Our method is inspired by AlphaCode-style candidate sampling (see the sketch after this list).
  • Unlike prior work, a generated heuristic is reused across all tasks of its domain, significantly reducing inference cost.
  • LLM-generated heuristics solve more unseen test tasks than state-of-the-art domain-independent heuristics implemented in planners such as Fast Downward, even though those baselines build on highly optimized C++ while our implementation uses an unoptimized Python planner. They are also competitive with the strongest learning-based approaches for domain-dependent planning.
  • Our work demonstrates the potential of LLMs to automate and enhance heuristic design in classical planning.
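
A hypothetical sketch of the candidate-sampling step (not the authors' code), reusing the greedy best-first search sketch above: sample several candidate heuristic functions from an LLM, evaluate each on the training tasks, and keep the one that solves the most. The `query_llm` helper and the task interface (`initial_state`, `is_goal`, `successors`) are assumptions for illustration; in practice the evaluation would also enforce time and memory limits.

```python
def select_best_heuristic(domain_prompt, training_tasks, num_candidates=8):
    """Sample several candidate heuristic functions from an LLM and keep the
    one that solves the most training tasks with greedy best-first search.

    `query_llm` is a hypothetical wrapper around an LLM API that returns
    Python source defining a function `heuristic(state)`."""
    best_heuristic, best_solved = None, -1
    for _ in range(num_candidates):
        source = query_llm(domain_prompt)   # hypothetical LLM call
        namespace = {}
        try:
            exec(source, namespace)         # load the candidate heuristic
            candidate = namespace["heuristic"]
        except Exception:
            continue                        # discard candidates that fail to load
        solved = 0
        for task in training_tasks:
            plan = greedy_best_first_search(
                task.initial_state, task.is_goal, task.successors, candidate
            )
            if plan is not None:
                solved += 1
        if solved > best_solved:
            best_heuristic, best_solved = candidate, solved
    return best_heuristic
```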