LoHoVLA: A Unified Vision-Language-Action Model for Long-Horizon Embodied Tasks
Abstract
A unified vision-language-action framework, LoHoVLA, combines a large pretrained vision-language model with hierarchical closed-loop control to improve performance on long-horizon embodied tasks.
Real-world embodied agents face long-horizon tasks, characterized by high-level goals that demand multi-step solutions beyond single actions. Successfully completing these tasks requires both high-level task planning (i.e., decomposing the goal into sub-tasks) and low-level motion control (i.e., generating precise robot actions). While existing vision-language-action (VLA) models and hierarchical architectures show promise on embodied tasks, the former often falter in planning and the latter can suffer from coordination issues, both of which hamper performance. We introduce a new unified VLA framework for long-horizon tasks, dubbed LoHoVLA, to overcome these limitations. LoHoVLA leverages a large pretrained vision-language model (VLM) as the backbone to jointly generate language and action tokens for sub-task generation and robot action prediction, respectively. This shared representation promotes better generalization across tasks. Additionally, LoHoVLA adopts a hierarchical closed-loop control mechanism to mitigate errors originating from both high-level planning and low-level control. To train LoHoVLA, we introduce LoHoSet, a dataset built on the Ravens simulator that contains 20 long-horizon tasks, each with 1,000 expert demonstrations composed of visual observations, linguistic goals, sub-tasks, and robot actions. Experimental results show that LoHoVLA significantly surpasses both hierarchical and standard VLA approaches on long-horizon embodied tasks in the Ravens simulator. These findings underscore the promise of unified architectures for advancing generalizable embodied intelligence.
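The abstract describes a single backbone that emits both language tokens (sub-tasks) and action tokens (robot actions), wrapped in hierarchical closed-loop control. Below is a minimal sketch of how such a rollout could be structured; the class and method names (`UnifiedVLA`, `plan_subtask`, `predict_action`) and the action format are assumptions made for illustration, not the authors' released API.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Observation:
    """One step of model input (fields assumed for illustration)."""
    image: object          # RGB(-D) observation from the workspace camera
    goal: str              # linguistic goal, e.g. "stack all blocks into a pyramid"


@dataclass
class Action:
    """A pick-and-place primitive in the Ravens style (format assumed)."""
    pick_pose: Tuple[float, float, float]
    place_pose: Tuple[float, float, float]


class UnifiedVLA:
    """Stub for a single VLM backbone that decodes language tokens (sub-tasks)
    and action tokens (robot actions) from one shared representation."""

    def plan_subtask(self, obs: Observation, done_subtasks: List[str]) -> Optional[str]:
        # High-level planning: return the next sub-task as text,
        # or None when the model judges the goal to be complete.
        raise NotImplementedError("placeholder for the trained model")

    def predict_action(self, obs: Observation, subtask: str) -> Action:
        # Low-level control: decode action tokens conditioned on the
        # current observation and the sub-task just produced.
        raise NotImplementedError("placeholder for the trained model")


def run_episode(model: UnifiedVLA, env, max_steps: int = 50) -> bool:
    """Hierarchical closed-loop rollout: the sub-task is re-planned after every
    executed action, so errors at either level can be corrected from the new
    observation rather than compounding."""
    obs = env.reset()
    completed: List[str] = []
    for _ in range(max_steps):
        subtask = model.plan_subtask(obs, completed)      # outer (planning) loop
        if subtask is None:
            return True                                   # goal reached
        action = model.predict_action(obs, subtask)       # inner (control) loop
        obs, done = env.step(action)                      # feedback closes both loops
        completed.append(subtask)
        if done:
            return True
    return False
```

Under this reading, a LoHoSet demonstration would pair each observation with the expert sub-task string and the expert action, so the same sequence model can be supervised on both output types.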
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks (2025)
- RoboFAC: A Comprehensive Framework for Robotic Failure Analysis and Correction (2025)
- Agentic Robot: A Brain-Inspired Framework for Vision-Language-Action Models in Embodied Agents (2025)
- OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning (2025)
- InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning (2025)
- EMAC+: Embodied Multimodal Agent for Collaborative Planning with VLM+LLM (2025)
- DeCo: Task Decomposition and Skill Composition for Zero-Shot Generalization in Long-Horizon 3D Manipulation (2025)