Dita: Scaling Diffusion Transformer for Generalist Vision-Language-Action Policy
Abstract
While recent vision-language-action models trained on diverse robot datasets exhibit promising generalization capabilities with limited in-domain data, their reliance on compact action heads to predict discretized or continuous actions constrains adaptability to heterogeneous action spaces. We present Dita, a scalable framework that leverages Transformer architectures to directly denoise continuous action sequences through a unified multimodal diffusion process. Departing from prior methods that condition denoising on fused embeddings via shallow networks, Dita employs in-context conditioning -- enabling fine-grained alignment between denoised actions and raw visual tokens from historical observations. This design explicitly models action deltas and environmental nuances. By scaling the diffusion action denoiser alongside the Transformer's scalability, Dita effectively integrates cross-embodiment datasets across diverse camera perspectives, observation scenes, tasks, and action spaces. This synergy enhances robustness to diverse environmental variations and facilitates the successful execution of long-horizon tasks. Evaluations across extensive benchmarks demonstrate state-of-the-art or comparable performance in simulation. Notably, Dita achieves robust real-world adaptation to environmental variations and complex long-horizon tasks through 10-shot finetuning, using only third-person camera inputs. The architecture establishes a versatile, lightweight, and open-source baseline for generalist robot policy learning. Project Page: https://robodita.github.io.
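The abstract describes a diffusion Transformer that denoises action chunks with in-context conditioning, i.e., condition tokens and noisy action tokens share one Transformer sequence rather than being fused by a shallow head. Below is a minimal PyTorch sketch of that idea; all names (`DitaSketch`, `obs_tokens`, `lang_tokens`, the layer sizes, and the DDPM-style training snippet) are hypothetical illustrations and do not reflect the authors' implementation details.

```python
# Minimal sketch of a diffusion-Transformer action denoiser with in-context
# conditioning. Hypothetical names and hyperparameters; not the paper's code.
import torch
import torch.nn as nn


class DitaSketch(nn.Module):
    def __init__(self, action_dim=7, chunk_len=16, d_model=512, n_layers=6, n_heads=8):
        super().__init__()
        self.action_in = nn.Linear(action_dim, d_model)   # embed noisy action chunk
        self.time_emb = nn.Embedding(1000, d_model)       # diffusion timestep embedding
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.action_out = nn.Linear(d_model, action_dim)  # predict the added noise
        self.chunk_len = chunk_len

    def forward(self, obs_tokens, lang_tokens, noisy_actions, t):
        # obs_tokens:    (B, N_obs, d_model)  raw visual tokens from historical observations
        # lang_tokens:   (B, N_lang, d_model) instruction tokens
        # noisy_actions: (B, chunk_len, action_dim) action chunk with diffusion noise added
        # t:             (B,) integer diffusion timesteps
        act = self.action_in(noisy_actions) + self.time_emb(t)[:, None, :]
        # In-context conditioning: concatenate condition tokens with action tokens so
        # self-attention can align denoised actions with individual observation tokens.
        seq = torch.cat([lang_tokens, obs_tokens, act], dim=1)
        out = self.backbone(seq)
        return self.action_out(out[:, -self.chunk_len:])   # noise prediction for the chunk


# Assumed DDPM-style epsilon-prediction objective (noise schedule details omitted):
#   noise = torch.randn_like(actions)
#   noisy = sqrt_alpha_bar[t, None, None] * actions + sqrt_one_minus[t, None, None] * noise
#   loss  = F.mse_loss(model(obs_tokens, lang_tokens, noisy, t), noise)
```

The design choice sketched here is the key contrast with shallow-head conditioning: because observation tokens stay in the sequence, every denoising step can attend to fine-grained visual evidence rather than a single fused embedding.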
Community
Dita (http://robodita.github.io/) is an open-source, simple yet effective policy for generalist robot learning:
- Dita enables 10-shot adaptation to complex, multitask, long-horizon scenarios in novel robot setups, and demonstrates remarkable robustness against complex object arrangements and even challenging lighting conditions in sophisticated 3D pick-and-rotation tasks.
- Dita seamlessly scales to a wide range of popular simulation benchmarks (SimplerEnv, CALVIN, LIBERO, and ManiSkill2), achieving state-of-the-art performance across these tasks.
The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model (2025)
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success (2025)
- DexVLA: Vision-Language Model with Plug-In Diffusion Expert for General Robot Control (2025)
- Towards Fast, Memory-based and Data-Efficient Vision-Language Policy (2025)
- SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model (2025)
- OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction (2025)
- PointVLA: Injecting the 3D World into Vision-Language-Action Models (2025)