BlenderGym: Benchmarking Foundational Model Systems for Graphics Editing
Abstract
3D graphics editing is crucial in applications like movie production and game design, yet it remains a time-consuming process that demands highly specialized domain expertise. Automating this process is challenging because graphics editing spans a variety of tasks, each calling for a distinct skill set. Recently, vision-language models (VLMs) have emerged as a powerful framework for automating the editing process, but their development and evaluation are bottlenecked by the lack of a comprehensive benchmark that requires human-level perception and presents real-world editing complexity. In this work, we present BlenderGym, the first comprehensive VLM system benchmark for 3D graphics editing. BlenderGym evaluates VLM systems through code-based 3D reconstruction tasks. We evaluate closed- and open-source VLM systems and observe that even the state-of-the-art VLM system struggles with tasks that are relatively easy for human Blender users. Enabled by BlenderGym, we study how inference scaling techniques impact VLM performance on graphics editing tasks. Notably, our findings reveal that the verifier used to guide the scaling of generation can itself be improved through inference scaling, complementing recent insights on inference scaling of LLM generation in coding and math tasks. We further show that inference compute is not uniformly effective and can be optimized by strategically distributing it between generation and verification.
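The generator-verifier inference scaling described above can be illustrated with a minimal best-of-N sketch. This is not the paper's implementation: `propose_edits` and `verify` below are hypothetical stand-ins for, respectively, a VLM sampling candidate Blender Python edits and a verifier scoring how closely each candidate's render matches the goal.

```python
import random


def propose_edits(seed, n):
    """Stand-in for a VLM generator: sample n candidate edits.

    In BlenderGym the candidates would be Blender Python edit
    scripts; here each candidate is just a number whose distance
    from a hidden target mimics render-vs-goal discrepancy.
    """
    rng = random.Random(seed)
    return [rng.uniform(0.0, 1.0) for _ in range(n)]


def verify(candidate, target=0.5):
    """Stand-in verifier: higher score means the candidate's
    (imagined) render is closer to the goal scene."""
    return -abs(candidate - target)


def best_of_n(seed, n):
    """Inference scaling on the generation side: sample n
    candidates and keep the one the verifier ranks highest."""
    candidates = propose_edits(seed, n)
    return max(candidates, key=verify)
```

Because the first draw from a fixed seed is shared across sample sizes, the verifier's score for `best_of_n(seed, 16)` can never be worse than for `best_of_n(seed, 1)`, which is the basic intuition behind spending more inference compute on generation. The paper's further point is that the verifier itself can also be scaled, so compute should be split between the two sides.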
Community
Which multimodal LLM should you be using to edit graphics in Blender?
Today, we’re releasing our #CVPR2025 Highlight🌟 work, #BlenderGym 🏋️♀️, the first agentic 3D graphics editing benchmark that will tell you exactly how multimodal LLMs compare in their Blender-editing skills.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing (2025)
- Leveraging Large Language Models For Scalable Vector Graphics Processing: A Review (2025)
- SVGEditBench V2: A Benchmark for Instruction-based SVG Editing (2025)
- V2Edit: Versatile Video Diffusion Editor for Videos and 3D Scenes (2025)
- TikZero: Zero-Shot Text-Guided Graphics Program Synthesis (2025)
- POEM: Precise Object-level Editing via MLLM control (2025)
- GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing (2025)