UniVLA Collection
All you need to get started with UniVLA
β’
5 items
β’
Updated
β’
1
This is the official checkpoint of our RSS 2025 work, UniVLA: Learning to Act Anywhere with Task-centric Latent Actions.
This checkpoint is UniVLA pre-trained on BridgeV2 (we used the version from the Open-X GCP bucket). For fine-tuning on simulation benchmarks or on your own dataset, please visit our official repo.
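For orientation, the sketch below shows one plausible way to load the checkpoint for inference. It is an assumption-laden sketch, not the documented pipeline: it presumes the checkpoint is exposed through the Hugging Face `transformers` remote-code path, as in OpenVLA-style VLA releases, and the repo id `qwbu/univla-7b-bridge`, the prompt, and the output handling are placeholders. The official repo is the authoritative reference for loading, fine-tuning, and decoding actions.

```python
# Minimal loading sketch under the assumptions stated above; verify every
# name against the official repo before use.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "qwbu/univla-7b-bridge"  # hypothetical repo id; substitute the real one

# trust_remote_code pulls in any model-specific classes shipped with the card.
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda")

# One camera frame plus a language instruction, as in BridgeV2-style setups.
image = Image.open("observation.png").convert("RGB")
prompt = "What action should the robot take to pick up the cup?"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    "cuda", dtype=torch.bfloat16
)
with torch.no_grad():
    # The model autoregressively emits (latent) action tokens; turning them
    # into executable robot actions is handled by the official repo.
    tokens = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(processor.batch_decode(tokens, skip_special_tokens=False))
```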
If you find our code or models useful in your work, please cite our paper:
```bibtex
@article{bu2025univla,
  title   = {UniVLA: Learning to Act Anywhere with Task-centric Latent Actions},
  author  = {Qingwen Bu and Yanting Yang and Jisong Cai and Shenyuan Gao and Guanghui Ren and Maoqing Yao and Ping Luo and Hongyang Li},
  journal = {arXiv preprint arXiv:2505.06111},
  year    = {2025}
}
```