ITHwangg/lebotica-pickplace-v2-step1k
Each episode includes two camera views, `observation.images.side` and `observation.images.top` (preview videos are available in the dataset viewer).
This dataset was collected by Lebotica during the LeRobot Worldwide Hackathon and used for training the SmolVLA model on structured robotic manipulation prompts.

Dataset structure:

```
.
├── data
│   └── chunk-000
│       ├── episode_000000.parquet
│       ├── ...
│       └── episode_000089.parquet
├── meta
│   ├── episodes.jsonl
│   ├── episodes_stats.jsonl
│   ├── info.json
│   └── tasks.jsonl
└── videos
    └── chunk-000
        ├── observation.images.side
        │   ├── episode_000000.mp4
        │   ├── ...
        │   └── episode_000089.mp4
        └── observation.images.top
            ├── episode_000000.mp4
            ├── ...
            └── episode_000089.mp4
```
The `tasks.jsonl` file contains 90 task prompts, one per episode. Each prompt follows a structured template for robotic manipulation.
Prompt template:

```
Pick a (red | blue | green) ball from the (top | middle | bottom)-(left | center | right) and place it in the (red | blue | green) plate.
```
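To inspect the full prompt list, you can read `meta/tasks.jsonl` directly. Below is a minimal sketch, assuming each JSONL line carries `task_index` and `task` fields as in the standard LeRobot v2.1 layout:

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Download only the metadata files from the dataset repo.
root = Path(
    snapshot_download(
        repo_id="ITHwangg/lebotica-pickplace-v2-step1k",
        repo_type="dataset",
        allow_patterns="meta/*",
    )
)

# Assumption: each line is a JSON object like
# {"task_index": 0, "task": "Pick a red ball from the top-left and place it in the blue plate."}
with open(root / "meta" / "tasks.jsonl") as f:
    tasks = [json.loads(line) for line in f if line.strip()]

print(len(tasks))  # expected: 90
for t in tasks[:3]:
    print(t["task_index"], t["task"])
```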
To use this dataset for training SmolVLA:
First, install the required dependencies:
```bash
git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e ".[smolvla]"
```
Then, train SmolVLA:

```bash
python lerobot/scripts/train.py \
  --dataset.repo_id=ITHwangg/svla_koch_pickplace_v2 \
  --policy.path=lerobot/smolvla_base \
  --num_workers=8 \
  --batch_size=64 \
  --steps=100000 \
  --eval_freq=500 \
  --log_freq=10 \
  --save_freq=500 \
  --save_checkpoint=true
```
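Before launching a long run, it can help to sanity-check that the dataset loads and has the expected shape. A minimal sketch using `LeRobotDataset` follows; the attribute names match the lerobot v2.x API, but verify them against your checkout:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("ITHwangg/svla_koch_pickplace_v2")

print(ds.num_episodes)  # expected: 90
print(ds.num_frames)    # total frames across all episodes
print(ds.meta.tasks)    # task_index -> prompt mapping from meta/tasks.jsonl

# Fetch one frame to confirm the feature keys (cameras, state, action).
sample = ds[0]
print(sample.keys())
```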
Caution: In v2.1, every frame in `data/chunk-000/*.parquet` has only task index 0, so you should map episode indexes to task indexes one by one:

```python
# lerobot/lerobot/common/datasets/lerobot_dataset.py

class LeRobotDataset(torch.utils.data.Dataset):
    def __init__(
        self,
        repo_id: str,
        root: str | Path | None = None,
        episodes: list[int] | None = None,
        image_transforms: Callable | None = None,
        delta_timestamps: dict[list[float]] | None = None,
        tolerance_s: float = 1e-4,
        revision: str | None = None,
        force_cache_sync: bool = False,
        download_videos: bool = True,
        video_backend: str | None = None,
    ):
        ...
        # Load actual data
        try:
            if force_cache_sync:
                raise FileNotFoundError
            assert all((self.root / fpath).is_file() for fpath in self.get_episodes_file_paths())
            self.hf_dataset = self.load_hf_dataset()
        except (AssertionError, FileNotFoundError, NotADirectoryError):
            self.revision = get_safe_version(self.repo_id, self.revision)
            self.download_episodes(download_videos)
            self.hf_dataset = self.load_hf_dataset()

        # HERE ###########################################################
        # After loading the dataset and setting up episode_data_index,
        # override task_index so each episode points at its own prompt.
        if self.hf_dataset is not None:
            # Create a new column with task_index = episode_index
            new_task_index = torch.stack(self.hf_dataset["episode_index"])
            self.hf_dataset = self.hf_dataset.map(
                lambda x, idx: {"task_index": new_task_index[idx]}, with_indices=True
            )
        ##################################################################

        self.episode_data_index = get_episode_data_index(self.meta.episodes, self.episodes)
        ...
```
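After patching, a quick way to confirm the remap took effect is to compare the two index columns on a loaded frame. This is a hypothetical check; the key and attribute names follow lerobot v2.x conventions:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("ITHwangg/svla_koch_pickplace_v2")

# With the patch applied, task_index should equal episode_index frame by frame.
frame = ds[0]
assert int(frame["task_index"]) == int(frame["episode_index"])

# The resolved prompt should now be the one recorded for that episode.
print(ds.meta.tasks[int(frame["task_index"])])
```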
This dataset is released under the MIT License.