|
--- |
|
license: mit |
|
task_categories: |
|
- robotics |
|
--- |
|
|
|
<div align="center"> |
|
<h2>Magma: A Foundation Model for Multimodal AI Agents</h2> |
|
|
|
[Jianwei Yang](https://jwyang.github.io/)<sup>*</sup><sup>1</sup><sup>†</sup> |
|
[Reuben Tan](https://cs-people.bu.edu/rxtan/)<sup>1</sup><sup>†</sup> |
|
[Qianhui Wu](https://qianhuiwu.github.io/)<sup>1</sup><sup>†</sup> |
|
[Ruijie Zheng](https://ruijiezheng.com/)<sup>2</sup><sup>‡</sup> |
|
[Baolin Peng](https://scholar.google.com/citations?user=u1CNjgwAAAAJ&hl=en&oi=ao)<sup>1</sup><sup>‡</sup> |
|
[Yongyuan Liang](https://cheryyunl.github.io)<sup>2</sup><sup>‡</sup> |
|
|
|
[Yu Gu](http://yu-gu.me/)<sup>1</sup> |
|
[Mu Cai](https://pages.cs.wisc.edu/~mucai/)<sup>3</sup> |
|
[Seonghyeon Ye](https://seonghyeonye.github.io/)<sup>4</sup> |
|
[Joel Jang](https://joeljang.github.io/)<sup>5</sup> |
|
[Yuquan Deng](https://scholar.google.com/citations?user=LTC0Q6YAAAAJ&hl=en)<sup>5</sup> |
|
[Lars Liden](https://sites.google.com/site/larsliden)<sup>1</sup> |
|
[Jianfeng Gao](https://www.microsoft.com/en-us/research/people/jfgao/)<sup>1</sup><sup>▽</sup> |
|
|
|
<sup>1</sup> Microsoft Research; <sup>2</sup> University of Maryland; <sup>3</sup> University of Wisconsin-Madison |
|
<sup>4</sup> KAIST; <sup>5</sup> University of Washington |
|
|
|
<sup>*</sup> Project lead <sup>†</sup> First authors <sup>‡</sup> Second authors <sup>▽</sup> Leadership |
|
|
|
\[[arXiv Paper](https://www.arxiv.org/pdf/2502.13130)\] \[[Project Page](https://microsoft.github.io/Magma/)\] \[[Hugging Face Paper](https://huggingface.co/papers/2502.13130)\] \[[Github Repo](https://github.com/microsoft/Magma)\] \[[Video](https://www.youtube.com/watch?v=SbfzvUU5yM8)\] |
|
|
|
</div> |
|
|
|
## Introduction |
|
|
|
This dataset contains the robotic manipulation data used for Magma pretraining. For a fair comparison, we follow OpenVLA and use the data mixture "siglip-224px+mx-oxe-magic-soup".
|
|
|
The dataset is organized by the following source datasets, with each source containing one or more Arrow files:
|
|
|
| Folder | Number of Shards | |
|
|:------------------------------------------------------|-------------------:| |
|
| austin_buds_dataset_converted_externally_to_rlds | 1 | |
|
| austin_sailor_dataset_converted_externally_to_rlds | 4 | |
|
| austin_sirius_dataset_converted_externally_to_rlds | 3 | |
|
| berkeley_autolab_ur5 | 1 | |
|
| berkeley_cable_routing | 1 | |
|
| berkeley_fanuc_manipulation | 1 | |
|
| bridge_orig | 17 | |
|
| cmu_stretch | 1 | |
|
| dlr_edan_shared_control_converted_externally_to_rlds | 1 | |
|
| fractal20220817_data | 21 | |
|
| furniture_bench_dataset_converted_externally_to_rlds | 4 | |
|
| iamlab_cmu_pickup_insert_converted_externally_to_rlds | 2 | |
|
| jaco_play | 1 | |
|
| kuka | 21 | |
|
| language_table | 8 | |
|
| nyu_franka_play_dataset_converted_externally_to_rlds | 1 | |
|
| roboturk | 3 | |
|
| stanford_hydra_dataset_converted_externally_to_rlds | 4 | |
|
| taco_play | 3 | |
|
| toto | 3 | |
|
| ucsd_kitchen_dataset_converted_externally_to_rlds | 1 | |
|
| utaustin_mutex | 4 | |
|
| viola | 1 | |
|
|
|
|
|
### Features |
|
|
|
In addition to the default features, we extract the visual traces of the 16 future frames for each frame. Each sample contains the following fields:
|
|
|
- `dataset_name`: Original source dataset name |
|
- `image`: Image of the robot scene (binary) |
|
- `task_string`: Description of the task |
|
- `frame_index`: Index of the frame in the video |
|
- `traj_index`: Index of the trajectory in the dataset |
|
- `action`: Robot action vector (serialized numpy array) |
|
- `trace`: Visual trace of tracked points over the current and 16 future frames (serialized numpy array)
|
- `trace_visibility`: Visibility mask for the trace (serialized numpy array) |
|
|
|
## Dataset Loading |
|
|
|
### Full Dataset Load |
|
|
|
```py |
|
from datasets import load_dataset |
|
dataset = load_dataset("MagmaAI/Magma-OXE-ToM", streaming=True, split="train") |
|
``` |
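
With `streaming=True`, `load_dataset` returns an `IterableDataset`, so samples are fetched on the fly rather than downloaded up front. As a quick sanity check, you can peek at a few samples with the standard-library `islice` helper (a minimal sketch, not part of this dataset's tooling):

```py
from itertools import islice

# Inspect the first 3 streamed samples without downloading the full dataset
for example in islice(dataset, 3):
    print(example['dataset_name'], example['task_string'], example['frame_index'])
```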
|
|
|
### Individual Dataset Load |
|
Alternatively, load an individual source dataset by specifying its folder via `data_dir`:
|
|
|
```py |
|
from datasets import load_dataset |
|
dataset = load_dataset("MagmaAI/Magma-OXE-ToM", data_dir="austin_buds_dataset_converted_externally_to_rlds", streaming=True, split="train") |
|
``` |
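
To mix several source datasets into a single stream, one option is `datasets.interleave_datasets`. The sketch below is an assumption about how you might combine folders from the table above, not a prescribed recipe; the folder selection is arbitrary:

```py
from datasets import interleave_datasets, load_dataset

# Example folders taken from the table above
folders = [
    "austin_buds_dataset_converted_externally_to_rlds",
    "jaco_play",
    "viola",
]

# Load each source as a streaming split and interleave them into one stream
streams = [
    load_dataset("MagmaAI/Magma-OXE-ToM", data_dir=folder, streaming=True, split="train")
    for folder in folders
]
mixed = interleave_datasets(streams)
```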
|
|
|
### Sample Decoding |
|
|
|
```py
import io
import pickle

from PIL import Image

# Helper function to deserialize binary fields (pickled numpy arrays)
def deserialize_array(bytes_data):
    return pickle.loads(bytes_data)

# Helper function to convert binary image data to a PIL Image
def bytes_to_image(image_bytes):
    return Image.open(io.BytesIO(image_bytes))

for i, example in enumerate(dataset):
    # decode the image: 256 x 256 x 3
    image = bytes_to_image(example['image'])
    # decode the action: 1 x 7
    action = deserialize_array(example['action'])
    # decode the trace: 1 x 17 x 256 x 2
    trace = deserialize_array(example['trace'])
    # decode the trace visibility mask: 1 x 17 x 256 x 1
    trace_visibility = deserialize_array(example['trace_visibility'])
```
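
To sanity-check the traces, you can overlay them on the decoded frame. The sketch below reuses the `image`, `trace`, and `trace_visibility` variables from the loop above and rests on assumptions not confirmed by this card: that the trace stores (x, y) pixel coordinates in the 256 x 256 image, and that the visibility mask is nonzero for visible points.

```py
from PIL import ImageDraw

# Assumption: trace[0] is 17 x 256 x 2 (current + 16 future frames, 256 tracked
# points, (x, y) pixel coordinates); trace_visibility[0] is 17 x 256 x 1.
points = trace[0]
visibility = trace_visibility[0]

draw = ImageDraw.Draw(image)
for t in range(points.shape[0]):
    for p in range(points.shape[1]):
        if visibility[t, p, 0]:
            x, y = points[t, p]
            draw.ellipse((x - 1, y - 1, x + 1, y + 1), fill=(255, 0, 0))

image.save("trace_overlay.png")
```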