---
license: mit
task_categories:
- robotics
---

# Magma: A Foundation Model for Multimodal AI Agents

[Jianwei Yang](https://jwyang.github.io/)\*¹  [Reuben Tan](https://cs-people.bu.edu/rxtan/)¹  [Qianhui Wu](https://qianhuiwu.github.io/)¹  [Ruijie Zheng](https://ruijiezheng.com/)²  [Baolin Peng](https://scholar.google.com/citations?user=u1CNjgwAAAAJ&hl=en&oi=ao)¹  [Yongyuan Liang](https://cheryyunl.github.io)²  [Yu Gu](http://yu-gu.me/)¹  [Mu Cai](https://pages.cs.wisc.edu/~mucai/)³  [Seonghyeon Ye](https://seonghyeonye.github.io/)⁴  [Joel Jang](https://joeljang.github.io/)⁵  [Yuquan Deng](https://scholar.google.com/citations?user=LTC0Q6YAAAAJ&hl=en)⁵  [Lars Liden](https://sites.google.com/site/larsliden)¹  [Jianfeng Gao](https://www.microsoft.com/en-us/research/people/jfgao/)¹

¹ Microsoft Research; ² University of Maryland; ³ University of Wisconsin-Madison; ⁴ KAIST; ⁵ University of Washington

\* Project lead

\[[arXiv Paper](https://www.arxiv.org/pdf/2502.13130)\]   \[[Project Page](https://microsoft.github.io/Magma/)\]   \[[Hugging Face Paper](https://huggingface.co/papers/2502.13130)\]   \[[Github Repo](https://github.com/microsoft/Magma)\]   \[[Video](https://www.youtube.com/watch?v=SbfzvUU5yM8)\]
## Introduction

This dataset contains the robotic manipulation data used in Magma pretraining. For a fair comparison, we follow OpenVLA and use the data mix "siglip-224px+mx-oxe-magic-soup". The dataset is organized by source dataset, with each source containing one or more arrow files:

| Folder                                                 | Number of Shards |
|:-------------------------------------------------------|-----------------:|
| austin_buds_dataset_converted_externally_to_rlds       |                1 |
| austin_sailor_dataset_converted_externally_to_rlds     |                4 |
| austin_sirius_dataset_converted_externally_to_rlds     |                3 |
| berkeley_autolab_ur5                                   |                1 |
| berkeley_cable_routing                                 |                1 |
| berkeley_fanuc_manipulation                            |                1 |
| bridge_orig                                            |               17 |
| cmu_stretch                                            |                1 |
| dlr_edan_shared_control_converted_externally_to_rlds   |                1 |
| fractal20220817_data                                   |               21 |
| furniture_bench_dataset_converted_externally_to_rlds   |                4 |
| iamlab_cmu_pickup_insert_converted_externally_to_rlds  |                2 |
| jaco_play                                              |                1 |
| kuka                                                   |               21 |
| language_table                                         |                8 |
| nyu_franka_play_dataset_converted_externally_to_rlds   |                1 |
| roboturk                                               |                3 |
| stanford_hydra_dataset_converted_externally_to_rlds    |                4 |
| taco_play                                              |                3 |
| toto                                                   |                3 |
| ucsd_kitchen_dataset_converted_externally_to_rlds      |                1 |
| utaustin_mutex                                         |                4 |
| viola                                                  |                1 |

### Features

In addition to the default features, we extracted the visual traces of the 16 future frames for each frame. The dataset contains the following fields:

- `dataset_name`: Original source dataset name
- `image`: Image of the robot scene (binary)
- `task_string`: Description of the task
- `frame_index`: Index of the frame in the video
- `traj_index`: Index of the trajectory in the dataset
- `action`: Robot action vector (serialized numpy array)
- `trace`: Robot trajectory trace (serialized numpy array)
- `trace_visibility`: Visibility mask for the trace (serialized numpy array)

## Dataset Loading

### Full Dataset Load

```py
from datasets import load_dataset

dataset = load_dataset("MagmaAI/Magma-OXE-ToM", streaming=True, split="train")
```

### Individual Dataset Load

Or specify a single source dataset by:

```py
from datasets import load_dataset

dataset = load_dataset(
    "MagmaAI/Magma-OXE-ToM",
    data_dir="austin_buds_dataset_converted_externally_to_rlds",
    streaming=True,
    split="train",
)
```

### Sample Decoding

```py
import io
import pickle

from PIL import Image

# Helper function to deserialize binary fields
def deserialize_array(bytes_data):
    return pickle.loads(bytes_data)

# Helper function to convert binary image data to a PIL Image
def bytes_to_image(image_bytes):
    return Image.open(io.BytesIO(image_bytes))

for i, example in enumerate(dataset):
    # decode the image: 256 x 256 x 3
    image = bytes_to_image(example['image'])
    # decode action: 1 x 7
    action = deserialize_array(example['action'])
    # decode trace: 1 x 17 x 256 x 2
    trace = deserialize_array(example['trace'])
    # decode trace visibility: 1 x 17 x 256 x 1
    trace_visibility = deserialize_array(example['trace_visibility'])
```
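### Trace Visualization (optional)

As a quick sanity check, the decoded trace can be overlaid on its frame. The sketch below is a minimal example, not part of the official pipeline: it assumes the last axis of `trace` is an (x, y) pair in the pixel space of the 256 x 256 image and treats any positive `trace_visibility` value as visible. If your copy of the data stores normalized coordinates, rescale them by the image size before drawing.

```py
import io
import pickle

import numpy as np
from PIL import Image, ImageDraw
from datasets import load_dataset

dataset = load_dataset("MagmaAI/Magma-OXE-ToM", streaming=True, split="train")
example = next(iter(dataset))

image = Image.open(io.BytesIO(example["image"])).convert("RGB")
trace = np.asarray(pickle.loads(example["trace"]))                  # (1, 17, 256, 2)
visibility = np.asarray(pickle.loads(example["trace_visibility"]))  # (1, 17, 256, 1)

points = trace[0]                      # (17, 256, 2): current frame + 16 future frames
mask = visibility[0, ..., 0] > 0       # (17, 256): keep visible points only

# Draw each visible trace point; assumes (x, y) pixel coordinates in the
# 256 x 256 frame -- rescale first if the coordinates are normalized.
draw = ImageDraw.Draw(image)
for t in range(points.shape[0]):
    for x, y in points[t][mask[t]]:
        draw.ellipse((x - 1, y - 1, x + 1, y + 1), fill=(255, 0, 0))

image.save("trace_overlay.png")
```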