Dataset conversion ideas
Can you share the code and explain how to convert the LIBERO dataset to this format, which stores observations in mp4 files? Why does it save so much space compared with the LIBERO dataset implemented by lerobot (https://huggingface.co/datasets/lerobot/libero_object_image)?
Thanks for the information. I've used any4lerobot to convert my dataset from LIBERO HDF5 data to the LeRobot format.
I've tried it on 4 datasets (LIBERO-10, Spatial, Goal, and Object). The conversion succeeded in most cases, but it sometimes showed bugs related to video decoding.
I also see that each of the 4 datasets above has 10 tasks with 50 demos per task, so each dataset should have 500 episodes, but some datasets in your collection don't have the full 500 episodes.
For example:
IPEC-COMMUNITY/libero_spatial_no_noops_lerobot: has 434 episodes
IPEC-COMMUNITY/libero_object_no_noops_lerobot: has 452 episodes
Have you met the same error? Can you share how to fix it?
I’ve also encountered the video decoding issue you’re describing, and it seems to occur somewhat randomly. The most straightforward workaround I’ve found is to re-encode the image sequence for the problematic episode back into a video. Interestingly, running the exact same command again on the regenerated video usually resolves the issue.
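Something along these lines works for the re-encoding step. This is only a minimal sketch with PyAV: the HDF5 file path and the `data/demo_*/obs/agentview_rgb` key are assumptions about the LIBERO layout, and `fps` should match your dataset.

```python
import av
import h5py
import numpy as np

def write_mp4(frames: np.ndarray, out_path: str, fps: int = 20) -> None:
    """Encode a (T, H, W, 3) uint8 frame stack into an mp4 with x264."""
    container = av.open(out_path, mode="w")
    stream = container.add_stream("libx264", rate=fps)
    stream.width, stream.height = frames.shape[2], frames.shape[1]
    stream.pix_fmt = "yuv420p"
    for img in frames:
        frame = av.VideoFrame.from_ndarray(img, format="rgb24")
        for packet in stream.encode(frame):
            container.mux(packet)
    for packet in stream.encode():  # flush buffered packets
        container.mux(packet)
    container.close()

# Hypothetical usage: pull the image sequence of the broken episode straight
# from the source HDF5 demo and regenerate its video, then rerun the same
# conversion command on the regenerated file.
with h5py.File("libero_object_demo.hdf5", "r") as f:   # example path
    frames = f["data/demo_12/obs/agentview_rgb"][:]    # key layout is an assumption
write_mp4(frames, "episode_000012.mp4", fps=20)
```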
Additionally, you might want to try a different video encoding method. Currently, we are using `av1` encoding, which has shown occasional video corruption problems; however, I haven't seen this happen with encoders like x264 so far. At the moment, this issue only occurs with the LIBERO dataset. Videos encoded from other datasets seem to be fine.
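If you want to experiment with the codec without touching the conversion pipeline, a small ffmpeg wrapper is enough to compare x264 against AV1 on the same episode. This is a sketch only: the frame-naming pattern and default quality settings are assumptions, not what any4lerobot/lerobot uses internally, and `libsvtav1` must be available in your ffmpeg build.

```python
import subprocess
from pathlib import Path

def encode_frames(frames_dir: Path, out_path: Path, fps: int = 20,
                  vcodec: str = "libx264") -> None:
    """Encode frame_000000.png, frame_000001.png, ... into a video with ffmpeg.

    vcodec="libx264" for H.264 or "libsvtav1" for AV1.
    """
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-framerate", str(fps),
            "-i", str(frames_dir / "frame_%06d.png"),
            "-c:v", vcodec,
            "-pix_fmt", "yuv420p",
            str(out_path),
        ],
        check=True,
    )

# Encode the same episode twice, then check which output decodes cleanly.
encode_frames(Path("episode_000012_frames"), Path("ep12_x264.mp4"), vcodec="libx264")
encode_frames(Path("episode_000012_frames"), Path("ep12_av1.mp4"), vcodec="libsvtav1")
```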
Regarding the discrepancy in the number of episodes in the repo, this is due to the difference in the datasets used. It looks like you are using the original LIBERO h5 files, whereas I am using a filtered version processed by the OpenVLA script, which is why we call it `libero_no_noops`. You can check OpenVLA for details; it applies some filtering, which results in fewer episodes.
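For intuition, the "no-noops" idea is roughly: drop timesteps whose action is effectively a no-op (near-zero motion with an unchanged gripper command). The sketch below is my own illustration with a made-up threshold and the usual 7-dim LIBERO action layout; it is not the OpenVLA script itself, which does more than this and is the place to look for why whole episodes can end up dropped.

```python
import numpy as np

def keep_non_noop_steps(actions: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Return indices of timesteps to keep.

    actions: (T, 7) array of [dx, dy, dz, droll, dpitch, dyaw, gripper]
    (assumed layout). A step counts as a no-op when all delta components
    are ~0 and the gripper command is the same as in the previous step.
    """
    deltas_small = np.all(np.abs(actions[:, :6]) < eps, axis=1)
    gripper = actions[:, 6]
    gripper_unchanged = np.concatenate([[False], gripper[1:] == gripper[:-1]])
    is_noop = deltas_small & gripper_unchanged
    return np.where(~is_noop)[0]

# Hypothetical usage on one demo's action array:
# kept = keep_non_noop_steps(actions)
# actions, observations = actions[kept], observations[kept]
```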