Dataset Overview

  • This dataset was created by team Lebotica during the LeRobot Worldwide Hackathon and was used to train the SmolVLA model on structured robotic manipulation prompts.
  • The dataset consists of 82 tasks built from 2 instruction templates.
  • You can check the demo of the trained SmolVLA on the Hackathon Demo Page (team number: 76).

Dataset Structure

├── data
│   └── chunk-000
│       ├── episode_000000.parquet
│       ├── ...
│       └── episode_000081.parquet
├── meta
│   ├── episodes.jsonl
│   ├── episodes_stats.jsonl
│   ├── info.json
│   └── tasks.jsonl
└── videos
    └── chunk-000
        ├── observation.images.side
        │   ├── episode_000000.mp4
        │   ├── ...
        │   └── episode_000081.mp4
        └── observation.images.top
            ├── episode_000000.mp4
            ├── ...
            └── episode_000081.mp4
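
The layout can be checked directly against the Hub. Here is a minimal sketch using the huggingface_hub client (list_repo_files is the standard Hub listing API; the expected counts below follow from the tree above):

    from huggingface_hub import list_repo_files

    # Enumerate every file in the dataset repository.
    files = list_repo_files(
        "ITHwangg/svla_koch_pickplace_and_stacking", repo_type="dataset"
    )

    parquets = [f for f in files if f.endswith(".parquet")]
    videos = [f for f in files if f.endswith(".mp4")]
    print(len(parquets))  # expected: 82 episode files
    print(len(videos))    # expected: 2 cameras x 82 episodes = 164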

The meta/tasks.jsonl file contains 82 task prompts, one per episode. Each prompt follows a structured template for robotic manipulation (a quick way to inspect the prompts is sketched after the examples). Example prompts include:

  1. Pick-and-Place Task

    Pick a (red | blue | green) ball from the (top | middle | bottom)-(left | center | right) and place in the (red | blue | green) plate.
    
  2. Stacking Task

    Stack the bowls with coloring order from (red | green | blue) -> (red | green | blue) -> (red | green | blue) to the front of the robot.
    
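
To inspect the full prompt list, here is a minimal sketch that downloads meta/tasks.jsonl and parses it as JSON Lines (hf_hub_download is the standard Hub API; the task_index and task field names follow the LeRobot v2.1 metadata convention):

    import json

    from huggingface_hub import hf_hub_download

    # Fetch the task metadata file from the dataset repository.
    path = hf_hub_download(
        repo_id="ITHwangg/svla_koch_pickplace_and_stacking",
        filename="meta/tasks.jsonl",
        repo_type="dataset",
    )

    # Each line is a JSON object such as
    # {"task_index": 0, "task": "Pick a red ball ..."}.
    with open(path) as f:
        tasks = [json.loads(line) for line in f]

    print(len(tasks))        # one prompt per episode
    print(tasks[0]["task"])  # e.g. a pick-and-place prompt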

Usage

To use this dataset for training SmolVLA:

  1. First, install the required dependencies:

    git clone https://github.com/huggingface/lerobot.git
    cd lerobot
    pip install -e ".[smolvla]"
    
  2. Train SmolVLA

    python lerobot/scripts/train.py \
      --dataset.repo_id=ITHwangg/svla_koch_pickplace_and_stacking \
      --policy.path=lerobot/smolvla_base \
      --num_workers=8 \
      --batch_size=64 \
      --steps=100000 \
      --eval_freq=500 \
      --log_freq=10 \
      --save_freq=500 \
      --save_checkpoint=true
    
  3. Caution

    • Currently, the Python script refers to the branch named v2.1.
    • Every data/chunk-000/*.parquet file stores only task index 0, so you should map episode indices to task indices one to one, as in the patch below (a sanity check is sketched after this list):
      # lerobot/lerobot/common/datasets/lerobot_dataset.py
      class LeRobotDataset(torch.utils.data.Dataset):
          def __init__(
              self,
              repo_id: str,
              root: str | Path | None = None,
              episodes: list[int] | None = None,
              image_transforms: Callable | None = None,
              delta_timestamps: dict[list[float]] | None = None,
              tolerance_s: float = 1e-4,
              revision: str | None = None,
              force_cache_sync: bool = False,
              download_videos: bool = True,
              video_backend: str | None = None,
          ):

              ...

              # Load actual data
              try:
                  if force_cache_sync:
                      raise FileNotFoundError
                  assert all((self.root / fpath).is_file() for fpath in self.get_episodes_file_paths())
                  self.hf_dataset = self.load_hf_dataset()
              except (AssertionError, FileNotFoundError, NotADirectoryError):
                  self.revision = get_safe_version(self.repo_id, self.revision)
                  self.download_episodes(download_videos)
                  self.hf_dataset = self.load_hf_dataset()

              # PATCH: remap task indices ####################################
              # Every episode parquet stores task_index 0, but each episode has
              # its own prompt, so set task_index = episode_index per frame.
              if self.hf_dataset is not None:
                  new_task_index = torch.stack(self.hf_dataset["episode_index"])
                  self.hf_dataset = self.hf_dataset.map(
                      lambda x, idx: {"task_index": new_task_index[idx]}, with_indices=True
                  )
              ################################################################

              self.episode_data_index = get_episode_data_index(self.meta.episodes, self.episodes)

              ...
      
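After applying the patch, a quick sanity check is to load the dataset through the patched class and confirm that each frame's task_index equals its episode_index. This is a sketch, assuming the patched lerobot checkout is installed and the v2.1 frame format (which exposes task_index and episode_index per frame):

    from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

    # Load the dataset through the patched class (videos download by default).
    dataset = LeRobotDataset("ITHwangg/svla_koch_pickplace_and_stacking")

    # Spot-check a handful of frames spread across the dataset.
    for i in range(0, len(dataset), max(1, len(dataset) // 5)):
        frame = dataset[i]
        assert frame["task_index"].item() == frame["episode_index"].item(), i
    print("task_index remap looks consistent")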

License

This dataset is released under the MIT License.
