---
license: mit
size_categories:
- "1G < size <10G"
language:
- en
---
# Fast-UMI: A Scalable and Hardware-Independent Universal Manipulation Interface

Welcome to the official repository of FastUMI!
---
## 📋 Contents
- [🔥 News](#🔥-news)
- [🏠 How to Collect Data](#🏠-how-to-collect-data)
- [📦 How to Use the Dataset](#📦-how-to-use-the-dataset)
- [📚 Dataset Structure](#📚-dataset-structure)
- [📂 A. Splitting Data](#📂-a-splitting-data)
- [💡 B. Merging Data](#💡-b-merging-data)
- [🔧 Usage](#🔧-usage)
- [License](#license)
- [Contact](#contact)
## 🔥 News
- **[2024-12]** We released the data collection code and the dataset.
## 🏠 How to Collect Data
The full data collection pipeline, including instructions and code, is available on our [GitHub repository](https://github.com/YdingTeam/FastUMI_Data).
## 📦 How to Use the Dataset
Hugging Face limits individual files to 50 GB, so the dataset has been split into smaller parts. After downloading, merge the parts to reconstruct the original archive, as described below.
## 📚 Dataset Structure
Each HDF5 file corresponds to a single episode and encapsulates both observations and actions. The hierarchical structure of an episode file is:
```
episode_.hdf5
├── observations/
│   ├── images/
│   │   └── front (Dataset)
│   └── qpos (Dataset)
├── action (Dataset)
└── attributes/
    └── sim = False
```
**Attributes:**

- `sim`
  - Type: Boolean
  - Value: `False`
  - Description: Indicates whether the data was recorded in simulation (`True`) or in the real world (`False`).

**Groups and Datasets:**

- `observations/`
  - `images/`
    - Description: Stores image data from the camera.
    - Datasets:
      - `front`
        - Type: Dataset containing image arrays.
        - Shape: `(num_frames, height=1920, width=1080, channels=3)`
        - Data Type: `uint8`
        - Compression: gzip, compression level 4.
  - `qpos`
    - Type: Dataset
    - Shape: `(num_timesteps, 7)`
    - Description: Stores position and orientation data for each timestep.
    - Columns: `[Pos X, Pos Y, Pos Z, Q_X, Q_Y, Q_Z, Q_W]`
- `action`
  - Type: Dataset
  - Shape: `(num_timesteps, 7)`
  - Description: Stores the action for each timestep; in this dataset, actions mirror the `qpos` data.
  - Columns: `[Pos X, Pos Y, Pos Z, Q_X, Q_Y, Q_Z, Q_W]`
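For quick inspection, an episode can be loaded with `h5py`. This is a minimal sketch assuming the layout above, with `sim` stored as a root-level attribute (the exact attribute location may differ in your files):

```python
import h5py

def load_episode(path):
    """Load one episode HDF5 file into memory (layout as described above)."""
    with h5py.File(path, "r") as f:
        sim = bool(f.attrs.get("sim", False))       # simulation flag
        images = f["observations/images/front"][:]  # (num_frames, H, W, 3) uint8
        qpos = f["observations/qpos"][:]            # (num_timesteps, 7)
        action = f["action"][:]                     # (num_timesteps, 7)
    return sim, images, qpos, action
```

Reading with `[:]` materializes each dataset as a NumPy array; for long episodes you may prefer to keep the file open and slice frames lazily instead.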
### 📂 A. Splitting Data
The data is split to ensure each part remains below the 50GB limit. The splitting process divides large `.tar.gz` files into smaller chunks.
**Splitting Overview:**
- **Method**: Use file splitting tools or commands to divide large files into manageable parts.
- **Example Tool**: `split` command in Unix-based systems.
**Example Command:**
```bash
split -b 8G FastUMI_Data.tar.gz FastUMI_Data.tar.gz.part-
```
This command splits `FastUMI_Data.tar.gz` into 8GB parts with filenames starting with `FastUMI_Data.tar.gz.part-`.
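If `split` is unavailable (e.g., on Windows), the same chunking can be sketched in pure Python. Note this sketch uses numeric suffixes (`part-000`, `part-001`, ...) rather than `split`'s default alphabetic ones; both sort correctly for a later glob-based merge:

```python
import os

def split_file(path, part_size, chunk_size=1024 * 1024):
    """Split `path` into numbered parts of at most `part_size` bytes,
    streaming in 1 MB chunks so large archives never sit in memory."""
    parts = []
    with open(path, "rb") as infile:
        index = 0
        while True:
            part_name = f"{path}.part-{index:03d}"
            written = 0
            with open(part_name, "wb") as outfile:
                while written < part_size:
                    chunk = infile.read(min(chunk_size, part_size - written))
                    if not chunk:
                        break
                    outfile.write(chunk)
                    written += len(chunk)
            if written == 0:
                os.remove(part_name)  # input exhausted; drop the empty part
                break
            parts.append(part_name)
            index += 1
    return parts
```

For example, `split_file("FastUMI_Data.tar.gz", 8 * 1024**3)` mirrors the 8 GB `split` command above.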
### 💡 B. Merging Data
After downloading the split files, users need to merge them to reconstruct the original dataset.
**Merging Instructions:**
1. **Navigate to the Download Directory:**
```bash
cd path_to_downloaded_files
```
2. **Merge Files Using `cat`:**
Use the `cat` command to concatenate the split parts. Replace `filename.tar.gz.part-001`, `filename.tar.gz.part-002`, etc., with your actual file names.
```bash
cat filename.tar.gz.part-* > filename.tar.gz
```
**Example:**
```bash
cat FastUMI_Data.tar.gz.part-* > FastUMI_Data.tar.gz
```
3. **Alternatively, Use the Provided Python Script to Automate Merging:**
Save the following script as `merge_files.py`:
```python
import glob


def merge_files(part_pattern, output_file):
    """
    Merges split file parts into a single file.

    :param part_pattern: Pattern matching the split file parts, e.g., "filename.tar.gz.part-*"
    :param output_file: Name of the output merged file, e.g., "filename.tar.gz"
    """
    parts = sorted(glob.glob(part_pattern))
    if not parts:
        raise FileNotFoundError(f"No parts found for pattern: {part_pattern}")
    with open(output_file, 'wb') as outfile:
        for part in parts:
            print(f"Merging {part} into {output_file}")
            with open(part, 'rb') as infile:
                while True:
                    chunk = infile.read(1024 * 1024)  # 1 MB
                    if not chunk:
                        break
                    outfile.write(chunk)
    print(f"Merge completed: {output_file}")


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Merge split file parts into a single file.")
    parser.add_argument('--pattern', type=str, required=True,
                        help='Pattern of split file parts, e.g., "filename.tar.gz.part-*"')
    parser.add_argument('--output', type=str, required=True,
                        help='Name of the output merged file, e.g., "filename.tar.gz"')
    args = parser.parse_args()
    merge_files(args.pattern, args.output)
```
**Usage:**
1. **Run the Merging Script:**
```bash
python merge_files.py --pattern "filename.tar.gz.part-*" --output "filename.tar.gz"
```
Replace `filename.tar.gz.part-*` and `filename.tar.gz` with your actual file name pattern and desired output file name.
2. **Example:**
```bash
python merge_files.py --pattern "FastUMI_Data.tar.gz.part-*" --output "FastUMI_Data.tar.gz"
```
4. **Verify the Merged File:**
Ensure that the merged file size equals the total size of the downloaded parts. You can use the `ls -lh` command to check file sizes.
```bash
ls -lh FastUMI_Data.tar.gz
```
5. **Extract the Dataset:**
Once merged, extract the dataset using the `tar` command:
```bash
tar -xzvf FastUMI_Data.tar.gz
```
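The extraction step can also be done from Python with the standard library, which is convenient inside a download-and-prepare script. A minimal sketch:

```python
import tarfile

def extract_archive(archive_path, dest_dir="."):
    """Extract a .tar.gz archive into dest_dir."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(path=dest_dir)  # only use on archives you trust
```

For example, `extract_archive("FastUMI_Data.tar.gz", "data/")` unpacks the merged archive into `data/`.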
## 🔧 Usage
After successfully merging and extracting the dataset, you can utilize it for training and evaluating robotic manipulation models. Detailed methodologies and application examples are available on the [Project Page](https://fastumi.com/) and in the [Early Version PDF](https://arxiv.org/abs/2409.19499).
## License
This project is licensed under the [MIT License](https://opensource.org/licenses/MIT).
## Contact
For questions or feedback, please reach out to yding25@binghamton.edu or visit our [website](https://fastumi.com/).