Dataset Card for Craftax Expert Skill Data
This dataset consists of expert demonstration trajectories from the Craftax environment (a JAX-accelerated version of the Crafter benchmark). Each trajectory includes ground-truth skill segmentation annotations, enabling research into action segmentation, skill discovery, imitation learning, and reinforcement learning with temporally structured data.
Dataset Details
Dataset Description
The Craftax Skill Segmentation Dataset contains gameplay trajectories produced by an expert policy in the Craftax environment. Each trajectory is labeled with ground-truth skill boundaries and skill identifiers, allowing users to train and evaluate models for temporal segmentation, behavior cloning, and skill-based representation learning.
- Curated by: dami2106
- License: Apache 2.0
- Language(s) (NLP): Not applicable (code/visual)
Dataset Sources
- Repository: https://huggingface.co/datasets/dami2106/Craftax-Skill-Data
- Environment: Craftax
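A minimal sketch of obtaining a local copy of the data with huggingface_hub (only the repository ID above is taken from this card; the file layout inside the repository is not reproduced here):

```python
from huggingface_hub import snapshot_download

# Download (and cache) the full dataset repository. Working from local files
# sidesteps any schema-inference issues a streaming loader might hit when
# files in the repository have different structures.
local_dir = snapshot_download(
    repo_id="dami2106/Craftax-Skill-Data",
    repo_type="dataset",
)
print(local_dir)  # local path of the cached repository snapshot
```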
Uses
Direct Use
This dataset is designed for use in:
- Training models to segment long-horizon behaviors into reusable skills.
- Evaluating action segmentation or hierarchical RL approaches (a metric sketch follows this list).
- Studying object-centric or spatially grounded RL methods.
- Pretraining representations from visual expert data.
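As a concrete instance of the evaluation use case, here is a minimal sketch of the standard mean-over-frames (MoF) accuracy score, computed directly from per-step skill labels; the function name and toy labels are illustrative, not part of the dataset:

```python
def mean_over_frames(predicted, ground_truth):
    """Mean-over-frames (MoF) accuracy: the fraction of timesteps whose
    predicted skill label matches the ground-truth label."""
    assert len(predicted) == len(ground_truth)
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)

# Toy example: labels match on 6 of 8 frames -> 0.75
gt = ["wood"] * 4 + ["stone"] * 4
pred = ["wood"] * 2 + ["stone"] * 6
print(mean_over_frames(pred, gt))  # 0.75
```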
Out-of-Scope Use
- Language-based tasks (no natural language data is included).
- Real-world robotics (simulation-only data).
- Tasks requiring raw image pixels, if the configuration you use does not include them.
Dataset Structure
Each data file includes:
- A sequence of states (top-down pixel, symbolic, raw pixel, or PCA-feature observations).
- Corresponding actions (a no-op action is appended for the final state, so actions align one-to-one with states).
- Skill labels marking where each skill begins and ends.
Example structure:
{
  "task_id": "name of the task",
  "pixel_obs": [...],      // Raw visual observations (e.g., RGB frames)
  "top_down_obs": [...],   // Environment state from a top-down view
  "pca_features": [...],   // Compressed feature vectors
  "actions": [...],        // Agent actions
  "groundTruth": [...],    // Per-step ground-truth skill segmentation labels
  "mapping": {...}         // Mapping metadata for skill ID -> groundTruth label
}
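Since groundTruth stores one skill label per timestep, contiguous runs of the same label form the skill segments. A minimal sketch of recovering (skill, start, end) segments from these labels (the skill names in the toy example are illustrative):

```python
def extract_segments(ground_truth):
    """Run-length encode per-step skill labels into (skill, start, end)
    segments, where `end` is exclusive."""
    segments = []
    start = 0
    for t in range(1, len(ground_truth) + 1):
        if t == len(ground_truth) or ground_truth[t] != ground_truth[start]:
            segments.append((ground_truth[start], start, t))
            start = t
    return segments

# Toy example with illustrative skill labels:
labels = ["wood", "wood", "wood", "table", "table", "stone"]
print(extract_segments(labels))
# [('wood', 0, 3), ('table', 3, 5), ('stone', 5, 6)]
```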
Dataset Creation
Curation Rationale
This dataset was created to support research in skill discovery and temporal abstraction in visual reinforcement learning environments. The Craftax environment provides meaningful high-level tasks and object interactions, making it a suitable benchmark.
Source Data
Data Collection and Processing
- The expert trajectories were generated using a scripted or trained expert policy in the Craftax environment.
- Skill labels were annotated using environment signals (e.g., task success or inventory changes) and manual rules.
Who are the source data producers?
The data was generated programmatically in the Craftax simulation environment by an expert agent.
Annotations
Annotation process
Skill annotations were added using heuristics based on inventory changes, environment triggers, and task events. They were verified for consistency across episodes.
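The exact rules are not published with this card; as a purely hypothetical sketch of an inventory-change heuristic of the kind described, each step could be attributed to the next inventory gain (the function, its tracked-item list, and the labeling convention are assumptions for illustration):

```python
# Hypothetical sketch only: the actual rules used for this dataset are not
# published here. The tracked items mirror skills that appear in the data.
TRACKED_ITEMS = ["wood", "table", "wooden_pickaxe", "stone", "stone_sword"]

def label_steps(inventories):
    """Attribute each step to the item gained at the next inventory increase.

    `inventories` is a list of {item: count} dicts, one per timestep; the
    steps between two gains are all labeled with the upcoming gain, so each
    segment ends exactly at the event that defines it.
    """
    n = len(inventories) - 1               # number of environment steps
    labels = [None] * n                    # None = no tracked event follows
    pending = 0                            # first step not yet labeled
    for t in range(n):
        for item in TRACKED_ITEMS:
            if inventories[t + 1].get(item, 0) > inventories[t].get(item, 0):
                labels[pending:t + 1] = [item] * (t + 1 - pending)
                pending = t + 1
                break
    return labels

# Toy example: gain wood at step 0, a table at step 2.
inv = [{"wood": 0}, {"wood": 1}, {"wood": 1, "table": 0}, {"wood": 1, "table": 1}]
print(label_steps(inv))  # ['wood', 'table', 'table']
```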
Who are the annotators?
The annotations were produced by automated heuristics, with manual inspection during development.
Bias, Risks, and Limitations
- The dataset is based on simulation, so real-world transferability may be limited.
- Skill labels are heuristically defined, which may not reflect a true underlying skill taxonomy.
- The expert behavior might be biased toward one specific strategy or task order.
Recommendations
Researchers should consider validating learned skills on diverse evaluation tasks. Skill segmentation boundaries are approximations and might not generalize well to different agents or environments.