AniMaker: Automated Multi-Agent Animated Storytelling with MCTS-Driven Clip Generation
Abstract
AniMaker, a multi-agent framework built around MCTS-Gen and AniEval, generates coherent storytelling videos from text alone, outperforming existing approaches in both quality and efficiency.
Despite rapid advancements in video generation models, generating coherent storytelling videos that span multiple scenes and characters remains challenging. Current methods often rigidly convert pre-generated keyframes into fixed-length clips, resulting in disjointed narratives and pacing issues. Furthermore, the inherent instability of video generation models means that even a single low-quality clip can significantly degrade the entire output animation's logical coherence and visual continuity. To overcome these obstacles, we introduce AniMaker, a multi-agent framework enabling efficient multi-candidate clip generation and storytelling-aware clip selection, thus creating globally consistent and story-coherent animation solely from text input. The framework is structured around specialized agents, including the Director Agent for storyboard generation, the Photography Agent for video clip generation, the Reviewer Agent for evaluation, and the Post-Production Agent for editing and voiceover. Central to AniMaker's approach are two key technical components: MCTS-Gen in the Photography Agent, an efficient Monte Carlo Tree Search (MCTS)-inspired strategy that intelligently navigates the candidate space to generate high-potential clips while optimizing resource usage; and AniEval in the Reviewer Agent, the first framework specifically designed for multi-shot animation evaluation, which assesses critical aspects such as story-level consistency, action completion, and animation-specific features by considering each clip in the context of its preceding and succeeding clips. Experiments demonstrate that AniMaker achieves superior quality as measured by popular metrics including VBench and our proposed AniEval framework, while significantly improving the efficiency of multi-candidate generation, pushing AI-generated storytelling animation closer to production standards.
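The abstract describes MCTS-Gen only at a high level. As a rough illustration of the idea, the sketch below shows how an MCTS-style multi-candidate clip search could look in Python; `generate_clip`, `ani_eval_score`, and all parameter choices are hypothetical stand-ins, not the paper's actual implementation.

```python
from __future__ import annotations

import math
import random
from dataclasses import dataclass, field

# Hypothetical stand-ins (names assumed, not from the paper): in AniMaker
# these would call the video-generation backend and the AniEval reviewer.
def generate_clip(shot_prompt, parent_clip=None):
    """Generate one candidate clip conditioned on the preceding clip."""
    return {"prompt": shot_prompt, "parent": parent_clip, "id": random.random()}

def ani_eval_score(clip, prev_clip, next_prompt):
    """Score a clip in the context of its neighbors, AniEval-style."""
    return random.random()  # placeholder consistency/quality score in [0, 1]

@dataclass
class Node:
    clip: dict | None
    parent: Node | None = None
    children: list[Node] = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

    def ucb(self, c: float = 1.4) -> float:
        """Upper-confidence bound balancing clip quality and exploration."""
        if self.visits == 0:
            return float("inf")
        exploit = self.value / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def mcts_gen(shot_prompts, candidates_per_shot=3, iterations=20):
    """Search the per-shot candidate space instead of exhausting it."""
    root = Node(clip=None)
    for _ in range(iterations):
        # Selection: walk down by UCB until a shot still has room for candidates.
        node, depth = root, 0
        while depth < len(shot_prompts) and len(node.children) >= candidates_per_shot:
            node = max(node.children, key=Node.ucb)
            depth += 1
        if depth == len(shot_prompts):
            continue  # fully expanded path; a real system would re-score it
        # Expansion: generate one new candidate clip for this shot.
        child = Node(clip=generate_clip(shot_prompts[depth], node.clip), parent=node)
        node.children.append(child)
        # Evaluation: score the new clip against its narrative context.
        next_prompt = shot_prompts[depth + 1] if depth + 1 < len(shot_prompts) else None
        reward = ani_eval_score(child.clip, node.clip, next_prompt)
        # Backpropagation: propagate the score to all ancestor clips.
        n: Node | None = child
        while n is not None:
            n.visits += 1
            n.value += reward
            n = n.parent
    # Readout: pick the highest-mean-value candidate for each shot.
    best, node = [], root
    while node.children:
        node = max(node.children, key=lambda c: c.value / max(c.visits, 1))
        best.append(node.clip)
    return best

clips = mcts_gen(["Shot 1: the fox wakes up", "Shot 2: the fox leaves the den"])
```

The appeal of a tree search here is that promising clip prefixes attract more candidates while weak branches are abandoned early, which is where the claimed efficiency gain over exhaustive per-shot sampling would come from.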
Community
We are thrilled to introduce AniMaker, the newest version of Anim-Director, an advanced framework designed for long video generation. By framing the entire video creation process as a continuous space search problem, AniMaker achieves high-level consistency and coherence across extended cinematic sequences.
Is this process fast or slow?
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- AnimeShooter: A Multi-Shot Animation Dataset for Reference-Guided Video Generation (2025)
- STORYANCHORS: Generating Consistent Multi-Scene Story Frames for Long-Form Narratives (2025)
- CineVerse: Consistent Keyframe Synthesis for Cinematic Scene Composition (2025)
- Stealing Creator's Workflow: A Creator-Inspired Agentic Framework with Iterative Feedback Loop for Improved Scientific Short-form Generation (2025)
- Action2Dialogue: Generating Character-Centric Narratives from Scene-Level Prompts (2025)
- A Multi-Agent AI Framework for Immersive Audiobook Production through Spatial Audio and Neural Narration (2025)
- ViStoryBench: Comprehensive Benchmark Suite for Story Visualization (2025)