GAIA is a benchmark designed to evaluate AI assistants on real-world tasks that require a combination of core capabilities—such as reasoning, multimodal understanding, web browsing, and proficient tool use.
It was introduced in the paper “GAIA: A Benchmark for General AI Assistants”.
The benchmark features 466 carefully curated questions that are conceptually simple for humans, yet remarkably challenging for current AI systems.
To illustrate the gap (figures from the GAIA paper):

- Humans: roughly 92% success rate
- GPT-4 with plugins: roughly 15%
GAIA highlights the current limitations of AI models and provides a rigorous benchmark to evaluate progress toward truly general-purpose AI assistants.
GAIA is carefully designed around the following pillars:

- **Real-world difficulty**: tasks require multi-step reasoning, multimodal understanding, and tool use.
- **Human interpretability**: despite being hard for AI, tasks remain conceptually simple and easy for humans to follow.
- **Non-gameability**: a correct answer requires completing the full task, so shortcuts and brute-forcing do not help.
- **Simplicity of evaluation**: answers are concise, factual, and unambiguous, which makes scoring straightforward (see the sketch after this list).
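Because answers are short, factual strings, scoring can be as simple as a normalized exact match. Below is a minimal, illustrative sketch of that idea in Python; the normalization rules are assumptions loosely based on the paper's quasi-exact-match description, not the official GAIA scorer.

```python
# Illustrative GAIA-style scoring: strings are compared after light
# normalization, numbers numerically, and comma-separated lists element-wise.
# This is a sketch of the idea, not the official scorer.

def normalize(text: str) -> str:
    """Lowercase, trim, and strip leading articles and trailing punctuation."""
    text = text.strip().lower()
    for article in ("the ", "a ", "an "):
        if text.startswith(article):
            text = text[len(article):]
    return text.strip(" .")

def score_answer(model_answer: str, ground_truth: str) -> bool:
    """Return True if the model answer matches the ground truth."""
    # Numeric answers: compare as floats (handles "1,000" vs "1000").
    try:
        return float(model_answer.replace(",", "")) == float(ground_truth.replace(",", ""))
    except ValueError:
        pass
    # List answers: compare element by element, order-sensitive.
    if "," in ground_truth:
        gold = [normalize(x) for x in ground_truth.split(",")]
        pred = [normalize(x) for x in model_answer.split(",")]
        return gold == pred
    # Plain string answers: normalized exact match.
    return normalize(model_answer) == normalize(ground_truth)
```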
GAIA tasks are organized into three levels of increasing complexity, each testing specific skills:

- **Level 1**: requires fewer than 5 steps and minimal tool use.
- **Level 2**: requires more complex reasoning and coordination between multiple tools, in 5 to 10 steps.
- **Level 3**: requires long-term planning and advanced integration of various tools.

Here is an example of a difficult (Level 3) GAIA question:

> Which of the fruits shown in the 2008 painting “Embroidery from Uzbekistan” were served as part of the October 1949 breakfast menu for the ocean liner that was later used as a floating prop for the film “The Last Voyage”? Give the items as a comma-separated list, ordering them in clockwise order based on their arrangement in the painting starting from the 12 o’clock position. Use the plural form of each fruit.
As you can see, this question challenges AI systems in several ways:

- It requires a **structured response format** (a comma-separated list in a specific order).
- It involves **multimodal understanding** (analyzing the fruits shown in the painting).
- It demands **multi-hop retrieval** of interdependent facts: identifying the fruits in the painting, discovering which ocean liner was used as a floating prop in *The Last Voyage*, and looking up that ship's October 1949 breakfast menu.
- It needs **correct sequencing** and high-level planning to chain these steps in the right order.
This kind of task highlights where standalone LLMs often fall short, making GAIA an ideal benchmark for agent-based systems that can reason, retrieve, and execute over multiple steps and modalities.
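To make that concrete, here is a deliberately simplified sketch of a multi-step, tool-using agent loop. Everything in it is hypothetical: the stub tools and the hard-coded plan stand in for an LLM-driven planner and real search/vision tools, and serve only to show the reason-retrieve-execute pattern that GAIA rewards.

```python
from typing import Callable

def web_search(query: str) -> str:
    # Stub: a real tool would query a search engine.
    return f"[stub result for: {query}]"

def analyze_image(path: str) -> str:
    # Stub: a real tool would run a vision model over the image.
    return f"[stub description of fruits in {path}]"

TOOLS: dict[str, Callable[[str], str]] = {
    "web_search": web_search,
    "analyze_image": analyze_image,
}

def run_agent(task: str, plan: list[tuple[str, str]]) -> str:
    """Walk a (tool, argument) plan, collecting observations as evidence.

    A real agent would let an LLM choose each step from prior observations
    and compose the final answer; here the plan is hard-coded for clarity.
    """
    evidence = [f"Task: {task}"]
    for tool_name, argument in plan:
        observation = TOOLS[tool_name](argument)
        evidence.append(f"{tool_name}({argument!r}) -> {observation}")
    return "\n".join(evidence)

print(run_agent(
    "Which fruits from the painting were on the October 1949 breakfast menu?",
    [
        ("analyze_image", "embroidery_from_uzbekistan.jpg"),
        ("web_search", "ocean liner used as floating prop in The Last Voyage"),
        ("web_search", "that liner's October 1949 breakfast menu"),
    ],
))
```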

To encourage continuous benchmarking, GAIA provides a public leaderboard hosted on Hugging Face, where you can test your models against its 300 test questions.
👉 Check out the leaderboard [here](https://huggingface.co/spaces/gaia-benchmark/leaderboard)
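If you want to run against the benchmark yourself, the questions are distributed as a gated dataset on the Hugging Face Hub. The sketch below uses the `datasets` library; treat the config name, field names, and submission format as assumptions to verify against the dataset card and the leaderboard instructions.

```python
# A minimal sketch of loading GAIA questions and preparing a leaderboard
# submission. Assumes access to the gated `gaia-benchmark/GAIA` dataset
# (accept its terms on the Hub and log in first); config and field names
# follow the dataset card but should be double-checked.
import json

from datasets import load_dataset

gaia = load_dataset("gaia-benchmark/GAIA", "2023_all")

# Validation questions ship with answers; test answers are withheld and
# scored via the leaderboard.
for example in gaia["validation"].select(range(3)):
    print(example["task_id"], "| Level:", example["Level"])
    print(example["Question"][:100], "...")

# Submissions are JSON Lines, one object per task (field names per the
# leaderboard page; verify before submitting).
with open("submission.jsonl", "w") as f:
    f.write(json.dumps({"task_id": "<task-id>", "model_answer": "apples, oranges"}) + "\n")
```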
Want to dive deeper into GAIA? Read the full paper on arXiv: [GAIA: A Benchmark for General AI Assistants](https://arxiv.org/abs/2311.12983).