---
license: mit
task_categories:
  - robotics
---

# Magma: A Foundation Model for Multimodal AI Agents

Jianwei Yang*1  Reuben Tan1  Qianhui Wu1  Ruijie Zheng2  Baolin Peng1  Yongyuan Liang2

Yu Gu1  Mu Cai3  Seonghyeon Ye4  Joel Jang5  Yuquan Deng5  Lars Liden1  Jianfeng Gao1

1 Microsoft Research; 2 University of Maryland; 3 University of Wisconsin-Madison
4 KAIST; 5 University of Washington

\* Project lead

[arXiv Paper]   [Project Page]   [Hugging Face Paper]   [Github Repo]   [Video]

## Introduction

This dataset contains the robotic manipulation data used in Magma pretraining. For a fair comparison, we follow OpenVLA and use the data mix "siglip-224px+mx-oxe-magic-soup".

The dataset is organized by source dataset, with each source in its own folder containing one or more arrow files:

| Folder | Number of Shards |
|---|---|
| austin_buds_dataset_converted_externally_to_rlds | 1 |
| austin_sailor_dataset_converted_externally_to_rlds | 4 |
| austin_sirius_dataset_converted_externally_to_rlds | 3 |
| berkeley_autolab_ur5 | 1 |
| berkeley_cable_routing | 1 |
| berkeley_fanuc_manipulation | 1 |
| bridge_orig | 17 |
| cmu_stretch | 1 |
| dlr_edan_shared_control_converted_externally_to_rlds | 1 |
| fractal20220817_data | 21 |
| furniture_bench_dataset_converted_externally_to_rlds | 4 |
| iamlab_cmu_pickup_insert_converted_externally_to_rlds | 2 |
| jaco_play | 1 |
| kuka | 21 |
| language_table | 8 |
| nyu_franka_play_dataset_converted_externally_to_rlds | 1 |
| roboturk | 3 |
| stanford_hydra_dataset_converted_externally_to_rlds | 4 |
| taco_play | 3 |
| toto | 3 |
| ucsd_kitchen_dataset_converted_externally_to_rlds | 1 |
| utaustin_mutex | 4 |
| viola | 1 |

## Features

In addition to the default features, we extract visual traces of the 16 future frames for each frame. The dataset contains the following fields:

- `dataset_name`: Original source dataset name
- `image`: Image of the robot scene (binary)
- `task_string`: Description of the task
- `frame_index`: Index of the frame in the video
- `traj_index`: Index of the trajectory in the dataset
- `action`: Robot action vector (serialized NumPy array)
- `trace`: Robot trajectory trace (serialized NumPy array)
- `trace_visibility`: Visibility mask for the trace (serialized NumPy array)

## Dataset Loading

### Full Dataset Load

```python
from datasets import load_dataset

dataset = load_dataset("MagmaAI/Magma-OXE-ToM", streaming=True, split="train")
```
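
Because `streaming=True` returns an `IterableDataset`, records are fetched lazily rather than downloaded up front. A quick way to sanity-check the stream (a minimal sketch; the printed fields follow the schema above):

```python
# Pull a single record from the stream without downloading the full dataset
sample = next(iter(dataset))
print(sample["dataset_name"], sample["task_string"])
```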

### Individual Dataset Load

Alternatively, load a single source dataset by passing its folder name as `data_dir`:

```python
from datasets import load_dataset

dataset = load_dataset("MagmaAI/Magma-OXE-ToM", data_dir="austin_buds_dataset_converted_externally_to_rlds", streaming=True, split="train")
```
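
To mix a few source folders on the fly, one option (a sketch using the `datasets` library's `interleave_datasets`, not part of the original card) is to stream each folder separately and interleave the results:

```python
from datasets import load_dataset, interleave_datasets

folders = ["austin_buds_dataset_converted_externally_to_rlds", "jaco_play"]
streams = [
    load_dataset("MagmaAI/Magma-OXE-ToM", data_dir=f, streaming=True, split="train")
    for f in folders
]
# Alternate samples across the streamed sources
mixed = interleave_datasets(streams, seed=42)
```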

### Sample Decoding

```python
import io
import pickle

from PIL import Image

# Helper function to deserialize pickled NumPy arrays
def deserialize_array(bytes_data):
    return pickle.loads(bytes_data)

# Helper function to convert binary image data to a PIL Image
def bytes_to_image(image_bytes):
    return Image.open(io.BytesIO(image_bytes))

for i, example in enumerate(dataset):
    # decode the image: 256 x 256 x 3
    image = bytes_to_image(example['image'])
    # decode action: 1 x 7
    action = deserialize_array(example['action'])
    # decode trace: 1 x 17 x 256 x 2 (17 = current frame + 16 future frames)
    trace = deserialize_array(example['trace'])
    # decode trace visibility: 1 x 17 x 256 x 1
    trace_visibility = deserialize_array(example['trace_visibility'])
```
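
The decoded trace can be rendered on top of its frame for inspection. A minimal sketch, assuming (this is not confirmed by the card) that `trace` stores (x, y) pixel coordinates in the 256 x 256 image frame; `draw_trace` is a hypothetical helper, not part of the dataset:

```python
from PIL import ImageDraw

# Hypothetical visualization: assumes trace holds (x, y) pixel coordinates
# in the 256 x 256 image frame; verify against your decoded arrays first.
def draw_trace(image, trace, trace_visibility):
    img = image.convert("RGB")
    draw = ImageDraw.Draw(img)
    points = trace[0]              # (17, 256, 2): frames x points x (x, y)
    visible = trace_visibility[0]  # (17, 256, 1)
    for t in range(points.shape[0]):
        for p in range(points.shape[1]):
            if visible[t, p, 0]:
                x, y = points[t, p]
                draw.ellipse((x - 1, y - 1, x + 1, y + 1), fill=(255, 0, 0))
    return img

draw_trace(image, trace, trace_visibility).save("trace_overlay.png")
```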