| number | title | body | state | created_at | updated_at | closed_at | url | author | comments_count | labels |
|---|---|---|---|---|---|---|---|---|---|---|
7,905
|
Unbounded network usage when opening Data Studio
|
### Describe the bug
Opening the Data Studio tab on a dataset page triggers continuous and unbounded network traffic. This issue occurs across multiple browsers and continues even without user interaction.
### Steps to reproduce the bug
https://huggingface.co/datasets/slone/nllb-200-10M-sample/viewer
### Expected behavior
Data Studio should load a limited, finite amount of data and stop further network activity unless explicitly requested by the user.
### Environment info
- OS: Windows 10
- Browsers: Chrome, Firefox, Edge
- Device: Desktop
- Network: Standard broadband connection
|
OPEN
| 2025-12-16T10:45:02
| 2025-12-16T10:45:02
| null |
https://github.com/huggingface/datasets/issues/7905
|
alizaredornica-sys
| 0
|
[] |
7,904
|
Request: Review pending neuroimaging PRs (#7886 BIDS loader, #7887 lazy loading)
|
## Summary
I'm building production neuroimaging pipelines that depend on `datasets` and would benefit greatly from two pending PRs being reviewed/merged.
## Pending PRs
| PR | Description | Status | Open Since |
|----|-------------|--------|------------|
| [#7886](https://github.com/huggingface/datasets/pull/7886) | BIDS dataset loader | Open | Nov 29 |
| [#7887](https://github.com/huggingface/datasets/pull/7887) | Lazy loading for NIfTI | Open | Nov 29 |
## Use Case
The neuroimaging community uses the BIDS (Brain Imaging Data Structure) standard for organizing MRI/fMRI data. These PRs would enable:
1. **#7886**: `load_dataset('bids', data_dir='/path/to/bids')` - Load local BIDS directories directly
2. **#7887**: Memory-efficient NIfTI handling (single 4D fMRI file can be 1-2GB)
## Current Workaround
Without these, users must either:
- Upload to Hub first, then consume (works but slow iteration)
- Hand-roll BIDS parsing (duplicates effort)
## Request
Could a maintainer review these PRs? Happy to address any feedback. The BIDS loader has tests passing and was end-to-end tested with real OpenNeuro data.
Thank you for the great work on `Nifti()` support - these PRs build on that foundation.
## Related
- Contributes to #7804 (Support scientific data formats)
- Built on @TobiasPitters's Nifti feature work
|
OPEN
| 2025-12-14T20:34:31
| 2025-12-15T11:25:29
| null |
https://github.com/huggingface/datasets/issues/7904
|
The-Obstacle-Is-The-Way
| 1
|
[] |
7,902
|
The child process retrieves the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`.
|
### Feature request
The child process retrieves the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`.
### Motivation
Because my local disk space is insufficient, I can only store the dataset on a remote Ceph server and process it with `datasets`.
I used the [data-juicer](https://github.com/datajuicer/data-juicer) framework as an outer layer on top of `datasets`, but it doesn't support streaming datasets. I then ran into a problem: for every load, map, and filter operation, I had to wait for a large number of child processes to execute `memory_mapped_arrow_table_from_file`. Since the actual file lives on the remote Ceph server, this operation is limited by network I/O.
I don't know whether this is a problem with my usage or simply how `datasets` is currently designed. However, I think that if the instances obtained from `datasets.load_dataset` were passed directly to the child processes instead of re-executing `memory_mapped_arrow_table_from_file`, it might solve my problem. Or does `datasets` already support this and I just didn't know it?
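For context, a minimal sketch of the pattern that hits this cost (the paths are placeholders standing in for my real Ceph mount):
```python
from datasets import load_dataset

# dataset cached as Arrow files on a remote (Ceph) mount
ds = load_dataset("json", data_files="/ceph/data/*.jsonl", split="train")

# with num_proc > 1, each child process re-opens the underlying Arrow
# cache via memory_mapped_arrow_table_from_file, so every map/filter
# pays the remote I/O cost again
ds = ds.map(lambda x: x, num_proc=8)
```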
### Your contribution
...
|
OPEN
| 2025-12-12T12:37:44
| 2025-12-15T11:48:16
| null |
https://github.com/huggingface/datasets/issues/7902
|
HQF2017
| 1
|
[
"enhancement"
] |
7,901
|
ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint
|
### Describe the bug
ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint
### Steps to reproduce the bug
1. The reproducible code is as follows:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": range(12)}).to_iterable_dataset(num_shards=1)
ds = ds.shuffle(seed=42)
for idx, example in enumerate(ds):
    print(example)
    if idx == 2:  # the checkpoint can be loaded correctly only when idx <= 1
        state_dict = ds.state_dict()
        print("checkpoint")
        break
print("state_dict: ", state_dict)
ds.load_state_dict(state_dict)
print("restart from checkpoint")
for example in ds:
    print(example)
```
2. The error message is as follows:
```
{'a': 0}
{'a': 7}
{'a': 6}
checkpoint
state_dict: {'examples_iterable': {'examples_iterable': {'examples_iterable': {'shard_idx': 1, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'batch_idx': 12, 'num_chunks_since_previous_state': 12, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'previous_state': {'examples_iterable': {'shard_idx': 1, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'batch_idx': 12, 'num_chunks_since_previous_state': 12, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'batch_idx': 3, 'num_chunks_since_previous_state': 2, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'epoch': 0}
restart from checkpoint
Loading a state dict of a shuffle buffer of a dataset without the buffer content. The shuffle buffer will be refilled before starting to yield new examples.
```
### Expected behavior
I want to be able to resume correctly from any checkpoint, but currently the checkpoint can only be loaded correctly when idx <= 1.
### Environment info
datasets Version: 4.4.1
@lhoestq
|
OPEN
| 2025-12-12T06:57:32
| 2025-12-16T19:34:46
| null |
https://github.com/huggingface/datasets/issues/7901
|
howitry
| 3
|
[] |
7,900
|
`Permission denied` when sharing cache between users
|
### Describe the bug
We want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors.
It looks like this was supported in the past (see #6589)?
Is there a correct way to share caches across users?
### Steps to reproduce the bug
1. Create a directory `/models/hf_hub_shared_experiment` with read/write permissions for two different users
2. For each user run the script below
```python
import os
os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data"
import datasets
import transformers
DATASET = "tatsu-lab/alpaca"
MODEL = "meta-llama/Llama-3.2-1B-Instruct"
model = transformers.AutoModelForCausalLM.from_pretrained(MODEL)
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL)
dataset = datasets.load_dataset(DATASET)
```
The first user is able to download and use the model and dataset. The second user gets these errors:
```
$ python ./experiment_with_shared.py
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/models--meta-llama--Llama-3.2-1B-Instruct/.no_exist/9213176726f574b556790deb65791e0c5aa438b6/custom_generate/generate.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/alpaca.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/.huggingface.yaml'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/dataset_infos.json'
Traceback (most recent call last):
File "/home/user2/.venv/experiment_with_shared.py", line 17, in <module>
dataset = datasets.load_dataset(DATASET)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1171, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/builder.py", line 390, in __init__
with FileLock(lock_path):
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 377, in __enter__
self.acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 333, in acquire
self._acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_unix.py", line 45, in _acquire
fd = os.open(self.lock_file, open_flags, self._context.mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/data/_models_hf_hub_shared_experiment_data_tatsu-lab___alpaca_default_0.0.0_dce01c9b08f87459cf36a430d809084718273017.lock'
```
### Expected behavior
The second user should be able to read the shared cache files.
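A mitigation sketch I'm considering (an assumption on my side: only the hub download cache really needs to be shared, and keeping the Arrow cache per-user means lock files are never shared across users):
```python
import os

# Share only the hub download cache; keep the datasets (Arrow) cache
# per-user so FileLock files are always owned by the current user.
os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = os.path.expanduser("~/.cache/hf_datasets")

import datasets  # env vars must be set before importing
```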
### Environment info
$ datasets-cli env
- `datasets` version: 4.4.1
- Platform: Linux-6.8.0-88-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
|
OPEN
| 2025-12-09T16:41:47
| 2025-12-16T15:39:06
| null |
https://github.com/huggingface/datasets/issues/7900
|
qthequartermasterman
| 2
|
[] |
7,894
|
embed_table_storage crashes (SIGKILL) on sharded datasets with Sequence() nested types
|
## Summary
`embed_table_storage` crashes with SIGKILL (exit code 137) when processing sharded datasets containing `Sequence()` nested types like `Sequence(Nifti())`. Likely affects `Sequence(Image())` and `Sequence(Audio())` as well.
The crash occurs at the C++ level with no Python traceback.
### Related Issues
- #7852 - Problems with NifTI (closed, but related embedding issues)
- #6790 - PyArrow 'Memory mapping file failed' (potentially related)
- #7893 - OOM issue (separate bug, but discovered together)
### Context
Discovered while uploading the [Aphasia Recovery Cohort (ARC)](https://openneuro.org/datasets/ds004884) neuroimaging dataset to HuggingFace Hub. Even after fixing the OOM issue (#7893), this crash blocked uploads.
Working implementation with workaround: [arc-aphasia-bids](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids)
## Reproduction
```python
from datasets import Dataset, Features, Sequence, Value
from datasets.features import Nifti
from datasets.table import embed_table_storage
features = Features({
"id": Value("string"),
"images": Sequence(Nifti()),
})
ds = Dataset.from_dict({
"id": ["a", "b"],
"images": [["/path/to/file.nii.gz"], []],
}).cast(features)
# This works fine:
table = ds._data.table.combine_chunks()
embedded = embed_table_storage(table) # OK
# This crashes with SIGKILL:
shard = ds.shard(num_shards=2, index=0)
shard_table = shard._data.table.combine_chunks()
embedded = embed_table_storage(shard_table) # CRASH - no Python traceback
```
## Key Observations
| Scenario | Result |
|----------|--------|
| Single `Nifti()` column | Works |
| `Sequence(Nifti())` on full dataset | Works |
| `Sequence(Nifti())` after `ds.shard()` | **CRASHES** |
| `Sequence(Nifti())` after `ds.select([i])` | **CRASHES** |
| Crash with empty Sequence `[]` | **YES** - not file-size related |
## Workaround
Convert shard to pandas and recreate the Dataset to break internal Arrow references:
```python
shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)
# CRITICAL: Pandas round-trip breaks problematic references
shard_df = shard.to_pandas()
fresh_shard = Dataset.from_pandas(shard_df, preserve_index=False)
fresh_shard = fresh_shard.cast(ds.features)
# Now embedding works
table = fresh_shard._data.table.combine_chunks()
embedded = embed_table_storage(table) # OK!
```
## Disproven Hypotheses
| Hypothesis | Test | Result |
|------------|------|--------|
| PyArrow 2GB binary limit | Monkey-patched `Nifti.pa_type` to `pa.large_binary()` | Still crashed |
| Memory fragmentation | Called `table.combine_chunks()` | Still crashed |
| File size issue | Tested with tiny NIfTI files | Still crashed |
## Root Cause Hypothesis
When `ds.shard()` or `ds.select()` creates a subset, the resulting Arrow table retains internal references/views to the parent table. When `embed_table_storage` processes nested struct types like `Sequence(Nifti())`, these references cause a crash in the C++ layer.
The pandas round-trip forces a full data copy, breaking these problematic references.
## Environment
- datasets version: main branch (post-0.22.0)
- Platform: macOS 14.x ARM64 (may be platform-specific)
- Python: 3.13
- PyArrow: 18.1.0
## Notes
This may ultimately be a PyArrow issue surfacing through datasets. Happy to help debug further if maintainers can point to where to look in the embedding logic.
|
OPEN
| 2025-12-03T04:20:06
| 2025-12-06T13:10:34
| null |
https://github.com/huggingface/datasets/issues/7894
|
The-Obstacle-Is-The-Way
| 3
|
[] |
7,893
|
push_to_hub OOM: _push_parquet_shards_to_hub accumulates all shard bytes in memory
|
## Summary
Large dataset uploads crash or hang due to memory exhaustion. This appears to be the root cause of several long-standing issues.
### Related Issues
This is the root cause of:
- #5990 - Pushing a large dataset on the hub consistently hangs (46 comments, open since 2023)
- #7400 - 504 Gateway Timeout when uploading large dataset
- #6686 - Question: Is there any way for uploading a large image dataset?
### Context
Discovered while uploading the [Aphasia Recovery Cohort (ARC)](https://openneuro.org/datasets/ds004884) neuroimaging dataset (~270GB, 902 sessions) to HuggingFace Hub using the `Nifti()` feature.
Working implementation with workaround: [arc-aphasia-bids](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids)
## Root Cause
In `_push_parquet_shards_to_hub` (arrow_dataset.py), the `additions` list accumulates every `CommitOperationAdd` with full Parquet bytes in memory:
```python
additions = []
for shard in shards:
parquet_content = shard.to_parquet_bytes() # ~300 MB per shard
shard_addition = CommitOperationAdd(path_or_fileobj=parquet_content)
api.preupload_lfs_files(additions=[shard_addition])
additions.append(shard_addition) # THE BUG: bytes stay in memory forever
```
For a 902-shard dataset: **902 × 300 MB = ~270 GB RAM requested → OOM/hang**.
The bytes are held until the final `create_commit()` call, preventing garbage collection.
## Reproduction
```python
from datasets import load_dataset
# Any large dataset with embedded files (Image, Audio, Nifti, etc.)
ds = load_dataset("imagefolder", data_dir="path/to/large/dataset")
ds.push_to_hub("repo-id", num_shards=500) # Watch memory grow until crash
```
## Workaround
Process one shard at a time, upload via `HfApi.upload_file(path=...)`, delete before next iteration:
```python
from pathlib import Path
from huggingface_hub import HfApi

api = HfApi()
for i in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)
    # Write to disk, not memory
    local_path = Path(f"shard-{i:05d}.parquet")
    shard.to_parquet(str(local_path))
    # Upload from file path (streams from disk)
    api.upload_file(
        path_or_fileobj=str(local_path),
        path_in_repo=f"data/train-{i:05d}-of-{num_shards:05d}.parquet",
        repo_id=repo_id,
        repo_type="dataset",
    )
    # Clean up before next iteration
    local_path.unlink()
    del shard
```
Memory usage stays constant (~1-2 GB) instead of growing linearly.
## Suggested Fix
After `preupload_lfs_files` succeeds for each shard, release the bytes:
1. Clear `path_or_fileobj` from the `CommitOperationAdd` after preupload
2. Or write to temp file and pass file path instead of bytes
3. Or commit incrementally instead of batching all additions
## Environment
- datasets version: main branch (post-0.22.0)
- Platform: macOS 14.x ARM64
- Python: 3.13
- PyArrow: 18.1.0
- Dataset: 902 shards, ~270 GB total embedded NIfTI files
|
CLOSED
| 2025-12-03T04:19:34
| 2025-12-05T22:45:59
| 2025-12-05T22:44:16
|
https://github.com/huggingface/datasets/issues/7893
|
The-Obstacle-Is-The-Way
| 2
|
[] |
7,883
|
Data.to_csv() cannot be recognized by pylance
|
### Describe the bug
Hi everyone! I am a beginner with `datasets`.
I am testing reading multiple CSV files from a zip archive. Loading the dataset succeeds, and the result can ultimately be saved to CSV correctly.
Intermediate results:
```
Generating train split: 62973 examples [00:00, 175939.01 examples/s]
DatasetDict({
train: Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', ' 对方钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
})
```
However, Pylance gives me the following errors:
```
Cannot access attribute "to_csv" for class "DatasetDict"
Attribute "to_csv" is unknown Pylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)
Cannot access attribute "to_csv" for class "IterableDatasetDict"
Attribute "to_csv" is unknown Pylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)
(method) to_csv: Unknown | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, num_proc: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int) | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int)
```
I ignored the error and continued executing to get the correct result:
```
Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', '对方 钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
```
Since the data volume is small, I manually merged the CSV files, and the result matches what the program saved.
It looks like this:
<img width="1264" height="150" alt="Image" src="https://github.com/user-attachments/assets/743540d7-ad8c-4531-ae7e-de71a5243a32" />
### Steps to reproduce the bug
This is my code:
```python
from datasets import load_dataset

def main():
    url = "data/test.zip"
    data_files = {"train": url}
    dataset = load_dataset("csv", data_files=data_files, split="train", encoding="gbk", skiprows=2)
    # print(dataset)
    dataset.to_csv("data/test.csv")

if __name__ == "__main__":
    main()
```
### Expected behavior
I want to know why this happens. Is there something wrong with my code?
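For what it's worth, narrowing the union type silences the diagnostic in a quick test (a sketch; `load_dataset` is annotated to return a union of dataset types, which appears to be why Pylance complains):
```python
from datasets import Dataset, load_dataset

dataset = load_dataset("csv", data_files={"train": "data/test.zip"}, split="train", encoding="gbk", skiprows=2)
# load_dataset is typed as returning a union (Dataset, DatasetDict,
# IterableDataset, IterableDatasetDict); an isinstance check narrows
# the type so Pylance accepts .to_csv()
assert isinstance(dataset, Dataset)
dataset.to_csv("data/test.csv")
```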
### Environment info
OS: Windows 11 (upgraded from Windows_NT x64 10.0.22631)
Editor:
VS Code Version: 1.106.2 (user setup)
"datasets" version = "4.4.1"
|
CLOSED
| 2025-11-26T16:16:56
| 2025-12-08T12:06:58
| 2025-12-08T12:06:58
|
https://github.com/huggingface/datasets/issues/7883
|
xi4ngxin
| 0
|
[] |
7,882
|
Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset
|
### Describe the bug
Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library.
- xet-hosted files load fine
- LFS-hosted files sometimes fail
Example:
- Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet
- Works: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-42/000_00027.parquet
Discussion: https://huggingface.co/datasets/epfml/FineWeb-HQ/discussions/2
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"epfml/FineWeb-HQ",
data_files="data/CC-MAIN-2024-26/000_00003.parquet",
)
```
Error message:
```
HfHubHTTPError: 403 Forbidden: None.
Cannot access content at: https://cdn-lfs-us-1.hf.co/repos/...
Make sure your token has the correct permissions.
...
<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>
```
### Expected behavior
The dataset should load successfully for all files.
### Environment info
- python 3.10
- datasets 4.4.1
|
OPEN
| 2025-11-26T14:06:02
| 2025-12-15T18:20:50
| null |
https://github.com/huggingface/datasets/issues/7882
|
Oligou
| 1
|
[] |
7,880
|
Spurious label column created when audiofolder/imagefolder directories match split names
|
## Describe the bug
When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is incorrectly created.
**Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4
```
from datasets import load_dataset
ds = load_dataset("datasets-examples/doc-audio-4")
print(ds["train"].features)
```
Shows a `label` column with `ClassLabel(names=['test', 'train'])` - incorrect!
## Root cause
In `folder_based_builder.py`, the `labels` set is accumulated across ALL splits (line 77). When directories are `train/` and `test/`:
- `labels = {"train", "test"}` → `len(labels) > 1` → `add_labels = True`
- Spurious label column is created with split names as class labels
## Expected behavior
No `label` column should be added when directory names match split names.
## Proposed fix
Skip label inference when inferred labels match split names.
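A sketch of what that guard might look like (the variable names here are my assumptions from a quick read of `folder_based_builder.py`; the actual code may differ):
```python
# hypothetical guard in the folder-based builder: skip label inference
# when the inferred "labels" are exactly the split directory names
split_names = set(self.config.data_files.keys())
if labels == split_names:
    add_labels = False
```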
cc @lhoestq
|
OPEN
| 2025-11-26T13:36:24
| 2025-11-26T13:36:24
| null |
https://github.com/huggingface/datasets/issues/7880
|
neha222222
| 0
|
[] |
7,879
|
python core dump when downloading dataset
|
### Describe the bug
When downloading a dataset in streamed mode and exiting the program before the download completes, the python program core dumps when exiting:
```
terminate called without an active exception
Aborted (core dumped)
```
Tested with python 3.12.3, python 3.9.21
### Steps to reproduce the bug
Create python venv:
```bash
python -m venv venv
source ./venv/bin/activate
pip install datasets==4.4.1
```
Execute the following program:
```
from datasets import load_dataset
ds = load_dataset("HuggingFaceFW/fineweb-2", 'hrv_Latn', split="test", streaming=True)
for sample in ds:
break
```
### Expected behavior
Clean program exit
### Environment info
described above
**note**: the example works correctly when using ```datasets==3.1.0```
|
OPEN
| 2025-11-24T06:22:53
| 2025-11-25T20:45:55
| null |
https://github.com/huggingface/datasets/issues/7879
|
hansewetz
| 10
|
[] |
7,877
|
work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist
|
This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is that `/tmp` is too small and the user needs to use another path, which they would typically set via `export TMPDIR=/some/big/storage`.
However, the `tempfile` facility that `datasets` and `pyarrow` use is somewhat broken. If the path doesn't exist, it silently ignores it and falls back to using `/tmp`. Watch this:
```
$ export TMPDIR='/tmp/username'
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp
```
Now let's ensure the path exists:
```
$ export TMPDIR='/tmp/username'
$ mkdir -p $TMPDIR
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp/username
```
So I recommend `datasets` do one of the following:
1. assert if `$TMPDIR` dir doesn't exist, telling the user to create it
2. auto-create it
The reason for (1) is that I don't know why `tempfile` doesn't auto-create the dir - perhaps there is some security implication? I will let you make the decision, but the key is not to let things silently fall through, leaving the user puzzled about why, no matter what they do, they can't get past `No space left on device` while using `datasets`.
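A minimal sketch of option (2), assuming the check runs early at import or session setup (exact placement would be up to the maintainers):
```python
import os

# If TMPDIR points at a non-existent directory, tempfile silently
# falls back to /tmp; creating it up front avoids the silent fallback.
tmpdir = os.environ.get("TMPDIR")
if tmpdir and not os.path.isdir(tmpdir):
    os.makedirs(tmpdir, exist_ok=True)
```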
Thank you.
I found this via https://stackoverflow.com/questions/37229398/python-tempfile-gettempdir-does-not-respect-tmpdir while trying to help a colleague to solve this exact issue.
|
CLOSED
| 2025-11-21T19:51:48
| 2025-12-16T14:20:48
| 2025-12-16T14:20:48
|
https://github.com/huggingface/datasets/issues/7877
|
stas00
| 1
|
[] |
7,872
|
IterableDataset does not use features information in to_pandas
|
### Describe the bug
An `IterableDataset` created from a generator with an explicit `features=` parameter seems to ignore the provided features for certain operations, e.g. `.to_pandas(...)`, when data coming from the generator has missing values.
### Steps to reproduce the bug
```python
import datasets
from datasets import features
def test_to_pandas_works_with_explicit_schema():
common_features = features.Features(
{
"a": features.Value("int64"),
"b": features.List({"c": features.Value("int64")}),
}
)
def row_generator():
data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}]
for row in data:
yield row
d = datasets.IterableDataset.from_generator(row_generator, features=common_features)
for _ in d.to_pandas():
pass
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:3703: in to_pandas
# table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000)))
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2563: in iter
# for key, pa_table in iterator:
# ^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2078: in _iter_arrow
# for key, pa_table in self.ex_iterable._iter_arrow():
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:599: in _iter_arrow
# yield new_key, pa.Table.from_batches(chunks_buffer)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# pyarrow/table.pxi:5039: in pyarrow.lib.Table.from_batches
# ???
# pyarrow/error.pxi:155: in pyarrow.lib.pyarrow_internal_check_status
# ???
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# > ???
# E pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
# E a: int64
# E b: list<item: null>
# E vs
# E a: int64
# E b: list<item: struct<c: int64>>
# pyarrow/error.pxi:92: ArrowInvalid
```
### Expected behavior
arrow operations use schema provided through `features=` and not the one inferred from the data
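As a side note, a workaround sketch that avoids the error in my test, reusing the names from the snippet above (assuming the data is small enough to materialize through the non-iterable path, which casts to the declared features first):
```python
# materialize through the map-style API, which applies the declared
# features before the pandas conversion
d = datasets.Dataset.from_generator(row_generator, features=common_features)
df = d.to_pandas()
```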
### Environment info
- datasets version: 4.4.1
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.1
- huggingface_hub version: 1.1.4
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- fsspec version: 2025.10.0
|
OPEN
| 2025-11-19T17:12:59
| 2025-11-19T18:52:14
| null |
https://github.com/huggingface/datasets/issues/7872
|
bonext
| 2
|
[] |
7,871
|
Reqwest Error: HTTP status client error (429 Too Many Requests)
|
### Describe the bug
full error message:
```
Traceback (most recent call last):
File "/home/yanan/miniconda3/bin/hf", line 7, in <module>
sys.exit(main())
~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main
app()
~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 327, in __call__
raise e
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 310, in __call__
return get_command(self)(*args, **kwargs)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 803, in main
return _main(
self,
...<6 lines>...
**extra,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 192, in _main
rv = self.invoke(ctx)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 691, in wrapper
return callback(**use_params)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 188, in download
_print_result(run_download())
~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 149, in run_download
return snapshot_download(
repo_id=repo_id,
...<10 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 451, in snapshot_download
thread_map(
~~~~~~~~~~^
_inner_hf_hub_download,
^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
~~~~~~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 431, in _inner_hf_hub_download
hf_hub_download( # type: ignore
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
repo_id,
^^^^^^^^
...<14 lines>...
dry_run=dry_run,
^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 986, in hf_hub_download
return _hf_hub_download_to_local_dir(
# Destination
...<16 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1390, in _hf_hub_download_to_local_dir
_download_to_tmp_and_move(
~~~~~~~~~~~~~~~~~~~~~~~~~^
incomplete_path=paths.incomplete_path(etag),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1791, in _download_to_tmp_and_move
xet_get(
~~~~~~~^
incomplete_path=incomplete_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 571, in xet_get
download_files(
~~~~~~~~~~~~~~^
xet_download_info,
^^^^^^^^^^^^^^^^^^
...<3 lines>...
progress_updater=[progress_updater],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
RuntimeError: Data processing error: CAS service error : Reqwest Error: HTTP status client error (429 Too Many Requests), domain: https://cas-server.xethub.hf.co/reconstructions/04b8a4667b84b3b874a6a2f070cec88920f6289e71185d69fa87e3cf29834710
```
### Steps to reproduce the bug
My command:
```bash
hf download nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim --repo-type dataset --include "single_panda_gripper.CoffeePressButton/**" --local-dir /home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/
```
### Expected behavior
The data should download without any issue.
### Environment info
huggingface_hub 1.1.4
|
CLOSED
| 2025-11-19T16:52:24
| 2025-11-30T13:38:32
| 2025-11-30T13:38:32
|
https://github.com/huggingface/datasets/issues/7871
|
yanan1116
| 2
|
[] |
7,870
|
Visualization for Medical Imaging Datasets
|
This is a followup to: https://github.com/huggingface/datasets/pull/7815.
I checked the possibilities to visualize the nifti (and potentially dicom), and here's what I found:
- https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr: https://www.jsdelivr.com/package/npm/brainbrowser (but that is from 2015!)
- https://github.com/rii-mango/Papaya, custom but BSD-style license that would require `datasets` to list the conditions somewhere in its readme, last commit June 2024. I looked into this library and it seems mature and good enough for our use case. In the short time I spent on it I wasn't able to get it working, but I'm sure we could, though it would probably require some JS on `datasets`' end. Available on jsdelivr as well: https://www.jsdelivr.com/package/npm/papaya-viewer. It seems to be frequently loaded.
- https://github.com/hanayik/niivue, BSD3 license, last commit May 26, 2021. Archived. Doesn't look like an option.
I think the only real option for us is Papaya, but there is the risk that we'll end up with an unmaintained package after a while, since development seems to be slow or even halted.
Conceptually, we need to figure out how to build a good solution for visualizing medical image data. In shap, we have a separate javascript folder in which we render visualizations; this could serve as a blueprint but would require a bundler, etc. Alternatively, one could go with a naive approach and just write some HTML in a Python string, loading the package via jsdelivr.
@lhoestq thoughts?
|
CLOSED
| 2025-11-19T11:05:39
| 2025-11-21T12:31:19
| 2025-11-21T12:31:19
|
https://github.com/huggingface/datasets/issues/7870
|
CloseChoice
| 1
|
[] |
7,869
|
Why does dataset merge fail when tools have different parameters?
|
Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions, I get the following error:
```
TypeError: Couldn't cast array of type
struct<refundFee: struct<description: string, type: string>, ... , servicerId: struct<description: string, type: string>>
to
{
    'refundFee': {'description': Value(dtype='string'), 'type': Value(dtype='string')},
    ...
    'templateId': {'description': Value(dtype='string'), 'type': Value(dtype='string')}
}
```
From my understanding, the merge fails because the tools column's nested structure is different across datasets — e.g., one struct contains an extra field servicerId while the other does not. This causes HuggingFace Datasets (and its underlying Apache Arrow schema) to reject the merge.
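A workaround I'm experimenting with is serializing the variable-schema column to JSON strings before merging, so both datasets share a single Arrow type (a sketch; `ds1` and `ds2` stand in for my real datasets):
```python
import json
from datasets import concatenate_datasets

# serialize the variable-schema "tools" column to JSON strings so both
# datasets end up with the same Arrow type (string), then merge;
# consumers parse the JSON back at training time
ds1 = ds1.map(lambda x: {"tools": json.dumps(x["tools"])})
ds2 = ds2.map(lambda x: {"tools": json.dumps(x["tools"])})
merged = concatenate_datasets([ds1, ds2])
```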
My question is: why is it designed this way?
Is this strict schema matching a hard requirement of the library?
Is there a recommended way to merge datasets with different tool schemas (different parameters and types)?
For an agent model supporting multiple tools, what's the best practice for preparing/merging training data without losing flexibility?
Any guidance or design rationale would be greatly appreciated. Thanks!
|
OPEN
| 2025-11-18T08:33:04
| 2025-11-30T03:52:07
| null |
https://github.com/huggingface/datasets/issues/7869
|
hitszxs
| 1
|
[] |
7,868
|
Data duplication with `split_dataset_by_node` and `interleaved_dataset`
|
### Describe the bug
Data is duplicated across ranks when processing an `IterableDataset` with `split_dataset_by_node` first and then `interleave_datasets`.
### Steps to reproduce the bug
I have provided a minimal script:
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distributed import split_dataset_by_node
path = "/mnt/wwx/datasets/fineweb/data/CC-MAIN-2013-20/"
files = [os.path.join(path, fn) for fn in os.listdir(path)]
dataset = load_dataset("parquet", split="train", data_files=files, streaming=True)
print(f"{dataset.n_shards=}")
dataset_rank0 = split_dataset_by_node(dataset, 0, 4)
dataset_rank1 = split_dataset_by_node(dataset, 1, 4)
dataset_rank0_interleaved = interleave_datasets([dataset_rank0], seed=42, probabilities=[1.0])
dataset_rank1_interleaved = interleave_datasets([dataset_rank1], seed=42, probabilities=[1.0])
print("print the first sample id from all datasets")
print("dataset", next(iter(dataset))['id'])
print("dataset_rank0", next(iter(dataset_rank0))['id'])
print("dataset_rank1", next(iter(dataset_rank1))['id'])
print("dataset_rank0_interleaved", next(iter(dataset_rank0_interleaved))['id'])
print("dataset_rank1_interleaved", next(iter(dataset_rank1_interleaved))['id'])
dataset_rank0_shard = dataset.shard(4, 0)
dataset_rank1_shard = dataset.shard(4, 1)
dataset_rank0_shard_interleaved = interleave_datasets([dataset_rank0_shard], seed=42, probabilities=[1.0])
dataset_rank1_shard_interleaved = interleave_datasets([dataset_rank1_shard], seed=42, probabilities=[1.0])
print("dataset_rank0_shard", next(iter(dataset_rank0_shard))['id'])
print("dataset_rank1_shard", next(iter(dataset_rank1_shard))['id'])
print("dataset_rank0_shard_interleaved", next(iter(dataset_rank0_shard_interleaved))['id'])
print("dataset_rank1_shard_interleaved", next(iter(dataset_rank1_shard_interleaved))['id'])
```
I used a subfolder of C4 with 14 parquet files for a quick run and got:
```
dataset.n_shards=14
print the first sample id from all datasets
dataset <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0 <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1 <urn:uuid:6b7da64f-c26e-4086-aef5-4b6f01106223>
dataset_rank0_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0_shard <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
dataset_rank0_shard_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard_interleaved <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
```
### Expected behavior
The first sample of `dataset_rank0_interleaved` and `dataset_rank1_interleaved` should be different, as with the other `rank0`/`rank1` pairs.
I dug into the functions to understand how the `split -> interleave` process works.
`split_dataset_by_node` on an iterable dataset doesn't change the dataset's `._ex_iterable` attribute; it just sets the distributed config on the dataset, and that config is used during the actual `__iter__` call to handle shard splitting or sample skipping.
However, `interleave_datasets` on iterable datasets copies out the `._ex_iterable` of each provided dataset and builds a new `_ex_iterable`, so the distributed config is not copied over, which causes the data duplication across DP ranks.
So I would first ask: is this an unsupported order of operations, meaning one should:
- always apply `split_dataset_by_node` last rather than in the middle, or
- use `dataset.shard(dp_size, dp_rank)` rather than `split_dataset_by_node` in cases like mine?
If this order is permitted, I think it is a bug, and I can open a PR to fix it. A reordered sketch of the first option is shown below.
(I hit this bug in real training; the related issue is https://github.com/ByteDance-Seed/VeOmni/issues/200 if it helps.)
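For reference, the reordering that avoids the duplication in my quick test (a sketch reusing the variables from the script above):
```python
# interleave first, then split by node, so the distributed config is
# set on the final dataset and not dropped by interleave_datasets
interleaved = interleave_datasets([dataset], seed=42, probabilities=[1.0])
rank0 = split_dataset_by_node(interleaved, 0, 4)
rank1 = split_dataset_by_node(interleaved, 1, 4)
print(next(iter(rank0))["id"])
print(next(iter(rank1))["id"])
```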
### Environment info
datasets 4.4.1
ubuntu 20.04
python 3.11.4
|
OPEN
| 2025-11-17T09:15:24
| 2025-12-15T11:52:32
| null |
https://github.com/huggingface/datasets/issues/7868
|
ValMystletainn
| 3
|
[] |
7,867
|
NonMatchingSplitsSizesError when loading partial dataset files
|
### Describe the bug
When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a `NonMatchingSplitsSizesError`. This prevents users from loading partial datasets for quick validation under poor network conditions or with very large datasets.
### Steps to reproduce the bug
1. Use the Hugging Face `datasets` library to load a dataset with only specific files specified
2. Ensure the dataset repository has split metadata defined in README.md
3. Observe the error when attempting to load a subset of files
```python
# Example code that triggers the error
from datasets import load_dataset
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
name="default",
data_files="data/train-00000-of-00213-312fd8d7a3c58a63.parquet",
split="train",
cache_dir="./data"
)
```
### Error Message
```
Traceback (most recent call last):
File "/Users/QingGo/code/llm_learn/src/data/clean_cc_bc.py", line 13, in <module>
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
...
File "/Users/QingGo/code/llm_learn/.venv/lib/python3.13/site-packages/datasets/utils/info_utils.py", line 77, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.exceptions.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=106199627990.47722, num_examples=192661, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=454897326, num_examples=905, shard_lengths=None, dataset_name='the_pile_books3_minus_gutenberg')}]
```
### Expected behavior
When loading partial dataset files, the system should:
1. Skip the `NonMatchingSplitsSizesError` validation, OR
2. Only log a warning message instead of raising an error
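In the meantime, the split-size verification can apparently be skipped with `verification_mode` (a sketch; I haven't confirmed it covers every case):
```python
from datasets import load_dataset

# verification_mode="no_checks" skips the split verification that
# raises NonMatchingSplitsSizesError when only some files are loaded
book_corpus_ds = load_dataset(
    "SaylorTwift/the_pile_books3_minus_gutenberg",
    name="default",
    data_files="data/train-00000-of-00213-312fd8d7a3c58a63.parquet",
    split="train",
    verification_mode="no_checks",
)
```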
### Environment info
- `datasets` version: 4.3.0
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.2
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
|
OPEN
| 2025-11-13T12:03:23
| 2025-11-16T15:39:23
| null |
https://github.com/huggingface/datasets/issues/7867
|
QingGo
| 2
|
[] |
7,864
|
add_column and add_item erroneously(?) require new_fingerprint parameter
|
### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both `Dataset.add_column` and `Dataset.add_item` require a `new_fingerprint` string. This parameter is passed directly to the dataset constructor, which lists the `fingerprint` parameter as optional; is there any reason it shouldn't be optional in these methods as well?
### Steps to reproduce the bug
Reproduction steps:
1. Look at the function signature for add_column: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6078
2. Repeat for add_item: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6336
### Expected behavior
`add_column` and `add_item` should either make the `new_fingerprint` parameter optional or document it in their docstrings.
### Environment info
Not environment-dependent
|
OPEN
| 2025-11-13T02:56:49
| 2025-12-07T14:41:40
| null |
https://github.com/huggingface/datasets/issues/7864
|
echthesia
| 2
|
[] |
7,863
|
Support hosting lance / vortex / iceberg / zarr datasets on huggingface hub
|
### Feature request
Huggingface datasets has great support for large tabular datasets in parquet with large partitions. I would love to see two things in the future:
- equivalent support for `lance`, `vortex`, `iceberg`, `zarr` (in that order) in a way that I can stream them using the datasets library
- more fine-grained control of streaming, so that I can stream at the partition / shard level
### Motivation
I work with very large `lance` datasets on S3 and often require random access for AI/ML applications like multi-node training. I was able to achieve high throughput dataloading on a lance dataset with ~150B rows by building distributed dataloaders that can be scaled both vertically (until i/o and CPU are saturated), and then horizontally (to workaround network bottlenecks).
Using this strategy I was able to achieve 10-20x the throughput of the streaming data loader from the `huggingface/datasets` library.
I realized that these would be great features for Hugging Face to support natively.
### Your contribution
I'm not ready yet to make a PR but open to it with the right pointers!
|
OPEN
| 2025-11-13T00:51:07
| 2025-11-26T14:10:29
| null |
https://github.com/huggingface/datasets/issues/7863
|
pavanramkumar
| 13
|
[
"enhancement"
] |
7,861
|
Performance Issue: save_to_disk() 200-1200% slower due to unconditional flatten_indices()
|
## 🐛 Bug Description
The `save_to_disk()` method unconditionally calls `flatten_indices()` when `_indices` is not None, causing severe performance degradation for datasets processed with filtering, shuffling, or multiprocessed mapping operations.
**Root cause**: This line rebuilds the entire dataset unnecessarily:
```python
dataset = self.flatten_indices() if self._indices is not None else self
```
## 📊 Performance Impact
| Dataset Size | Operation | Save Time | Slowdown |
|-------------|-----------|-----------|----------|
| 100K | Baseline (no indices) | 0.027s | - |
| 100K | Filtered (with indices) | 0.146s | **+431%** |
| 100K | Shuffled (with indices) | 0.332s | **+1107%** |
| 250K | Shuffled (with indices) | 0.849s | **+1202%** |
## 🔄 Reproduction
```python
from datasets import Dataset
import time
# Create dataset
dataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})
# Baseline save (no indices)
start = time.time()
dataset.save_to_disk('baseline')
baseline_time = time.time() - start
# Filtered save (creates indices)
filtered = dataset.filter(lambda x: True)
start = time.time()
filtered.save_to_disk('filtered')
filtered_time = time.time() - start
print(f"Baseline: {baseline_time:.3f}s")
print(f"Filtered: {filtered_time:.3f}s")
print(f"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%")
```
**Expected output**: Filtered dataset is 400-1000% slower than baseline
## 💡 Proposed Solution
Add optional parameter to control flattening:
```python
def save_to_disk(self, dataset_path, flatten_indices=True):
dataset = self.flatten_indices() if (self._indices is not None and flatten_indices) else self
# ... rest of save logic
```
**Benefits**:
- ✅ Immediate performance improvement for users who don't need flattening
- ✅ Backwards compatible (default behavior unchanged)
- ✅ Simple implementation
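A usage sketch of the proposed parameter (hypothetical API, not yet in `datasets`):
```python
# hypothetical: skip index flattening when a flattened on-disk copy
# isn't needed, avoiding the full dataset rebuild
filtered.save_to_disk('filtered', flatten_indices=False)
```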
## 🌍 Environment
- **datasets version**: 2.x
- **Python**: 3.10+
- **OS**: Linux/macOS/Windows
## 📈 Impact
This affects **most ML preprocessing workflows** that filter/shuffle datasets before saving. The degradation grows with dataset size, making it a critical bottleneck for production systems.
## 🔗 Additional Resources
We have comprehensive test scripts demonstrating this across multiple scenarios if needed for further investigation.
|
OPEN
| 2025-11-11T11:05:38
| 2025-11-11T11:05:38
| null |
https://github.com/huggingface/datasets/issues/7861
|
KCKawalkar
| 0
|
[] |
7,856
|
Missing transcript column when loading a local dataset with "audiofolder"
|
### Describe the bug
My local dataset is not properly loaded when using `load_dataset("audiofolder", data_dir="my_dataset")` with a `jsonl` metadata file.
Only the `audio` column is read while the `transcript` column is not.
The last tested `datasets` version where the behavior was still correct is 2.18.0.
### Steps to reproduce the bug
Dataset directory structure:
```
my_dataset/
- data/
- test/
- 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3
- 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3
- ...
- metadata.jsonl
```
`metadata.jsonl` file content:
```
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3", "transcript": "Ata tudoù penaos e tro ar bed ?"}
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3", "transcript": "Ur gwir blijadur eo adkavout ac'hanoc'h hiziv."}
...
```
```python3
my_dataset = load_dataset("audiofolder", data_dir="my_dataset")
print(my_dataset)
'''
DatasetDict({
test: Dataset({
features: ['audio'],
num_rows: 347
})
})
'''
print(my_dataset['test'][0])
'''
{'audio': <datasets.features._torchcodec.AudioDecoder object at 0x75ffcd172510>}
'''
```
### Expected behavior
Being able to access the `transcript` column in the loaded dataset.
### Environment info
- `datasets` version: 4.4.1
- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.39
- Python version: 3.13.9
- `huggingface_hub` version: 1.1.2
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
Note: same issue with `datasets` v3.6.0
|
CLOSED
| 2025-11-08T16:27:58
| 2025-11-09T12:13:38
| 2025-11-09T12:13:38
|
https://github.com/huggingface/datasets/issues/7856
|
gweltou
| 2
|
[] |
7,852
|
Problems with NifTI
|
### Describe the bug
There are currently 2 problems with the new NifTI feature:
1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)
2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative paths to the nifti files:
```
table['nifti']
<pyarrow.lib.ChunkedArray object at 0x798245d37d60>
[
-- is_valid: all not null
-- child 0 type: binary
[
null,
null,
null,
null,
null,
null
]
-- child 1 type: string
[
"/home/tobias/programming/github/datasets/nifti_extracted/T1.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/fieldmap.nii"
]
]
```
instead of containing bytes. The code was copy-pasted from the Pdf feature, so I wonder what is going wrong here.
### Steps to reproduce the bug
see the linked comment
### Expected behavior
Downloading should work as smoothly as it does for PDF.
### Environment info
- `datasets` version: 4.4.2.dev0
- Platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
|
CLOSED
| 2025-11-06T11:46:33
| 2025-11-06T16:20:38
| 2025-11-06T16:20:38
|
https://github.com/huggingface/datasets/issues/7852
|
CloseChoice
| 2
|
[] |
7,842
|
Transform with columns parameter triggers on non-specified column access
|
### Describe the bug
Iterating over a [`Column`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L633-L692) iterates through the parent [`Dataset`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L695) and applies all formatting/transforms on each row, regardless of which column is being accessed. This causes an error when transforms depend on columns not present in the projection.
### Steps to reproduce the bug
### Load a dataset with multiple columns
```python
ds = load_dataset("mrbrobot/isic-2024", split="train")
```
### Define a transform that specifies an input column
```python
def image_transform(batch):
batch["image"] = batch["image"] # KeyError when batch doesn't contain "image"
return batch
# apply transform only to image column
ds = ds.with_format("torch")
ds = ds.with_transform(image_transform, columns=["image"], output_all_columns=True)
```
### Iterate over non-specified column
```python
# iterate over a different column, triggers the transform on each row, but batch doesn't contain "image"
for t in ds["target"]: # KeyError: 'image'
print(t)
```
### Expected behavior
If a user iterates over `ds["target"]` and the transform specifies `columns=["image"]`, the transform should be skipped.
### Environment info
`datasets`: 4.2.0
Python: 3.12.12
Linux: Debian 11.11
|
CLOSED
| 2025-11-03T13:55:27
| 2025-11-03T14:34:13
| 2025-11-03T14:34:13
|
https://github.com/huggingface/datasets/issues/7842
|
mr-brobot
| 0
|
[] |
7,841
|
DOC: `mode` parameter on pdf and video features unused
|
Following up on https://github.com/huggingface/datasets/pull/7840 I asked claude code to check for undocumented parameters for other features and it found:
- mode parameter on video is documented but unused: https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py#L48-L49
- the same goes for the mode parameter on the pdf feature: https://github.com/huggingface/datasets/blob/main/src/datasets/features/pdf.py#L47-L48
I assume checking if these modes can be supported and otherwise removing them is the way to go here.
|
CLOSED
| 2025-11-02T12:37:47
| 2025-11-05T14:04:04
| 2025-11-05T14:04:04
|
https://github.com/huggingface/datasets/issues/7841
|
CloseChoice
| 1
|
[] |
7,839
|
datasets doesn't work with python 3.14
|
### Describe the bug
Seems that `datasets` doesn't work with python==3.14. The root cause appears to be an incompatibility between `dill`'s pickler and an API that changed in Python 3.14:
```
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Steps to reproduce the bug
```bash
# in a new folder
uv init
uv python pin 3.14
uv add datasets
uv run python
```
```python
# in the REPL
import datasets
datasets.load_dataset("cais/mmlu", "all")  # will fail on any dataset
```
```
>>> datasets.load_dataset("cais/mmlu", "all")
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
datasets.load_dataset("cais/mmlu", "all")
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
path=path,
...<10 lines>...
**config_kwargs,
)
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder
builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 615, in _use_legacy_cache_dir_if_possible
self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/builder.py", line 487, in _check_legacy_cache2
config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files})
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash
return cls.hash_bytes(dumps(value))
~~~~~^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps
dump(obj, file)
~~~~^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump
Pickler(file, recurse=True).dump(obj)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump
StockPickler.dump(self, obj)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 498, in dump
self.save(obj)
~~~~~~~~~^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save
StockPickler.save(self, obj, save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 572, in save
f(self, obj) # Call unbound method with explicit self
~^^^^^^^^^^^
File "/Users/zmoshe/temp/test_datasets_py3.14/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict
StockPickler.save_dict(pickler, obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/Users/zmoshe/.local/uv/python/cpython-3.14.0rc2-macos-aarch64-none/lib/python3.14/pickle.py", line 1064, in save_dict
self._batch_setitems(obj.items(), obj)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
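For context, here is my reading of the traceback (an illustration of my own, not an official explanation): CPython 3.14's `pickle.save_dict` passes the containing object as an extra positional argument to `_batch_setitems`, so dill's older two-argument override no longer matches:
```python
# Hypothetical illustration of the signature mismatch shown in the traceback above
class OldPickler:
    def _batch_setitems(self, items):  # pre-3.14 style override, as in dill
        pass

# CPython 3.14's pickle.py (line 1064 in the traceback) now calls:
#     self._batch_setitems(obj.items(), obj)
# which, against the two-argument override, raises the same TypeError:
OldPickler()._batch_setitems({}.items(), {})
```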
### Expected behavior
It should work.
### Environment info
datasets==v4.3.0
python==3.14
|
CLOSED
| 2025-11-02T09:09:06
| 2025-11-04T14:02:25
| 2025-11-04T14:02:25
|
https://github.com/huggingface/datasets/issues/7839
|
zachmoshe
| 4
|
[] |
7,837
|
mono parameter to the Audio feature is missing
|
According to the docs, there is a "mono" parameter to the Audio feature, which turns any stereo into mono. In practice the signal is not touched and the mono parameter, even though documented, does not exist.
https://github.com/huggingface/datasets/blob/41c05299348a499807432ab476e1cdc4143c8772/src/datasets/features/audio.py#L52C1-L54C22
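Until this is resolved, a manual workaround sketch (my own suggestion; the channel axis depends on the decoding backend, so treat it as an assumption):
```python
import numpy as np

def to_mono(array: np.ndarray, channel_axis: int = 0) -> np.ndarray:
    # Hypothetical helper: average across the channel axis for multi-channel audio.
    # channel_axis=0 assumes (channels, samples); use 1 for (samples, channels).
    if array.ndim == 2:
        return array.mean(axis=channel_axis)
    return array
```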
|
CLOSED
| 2025-10-31T15:41:39
| 2025-11-03T15:59:18
| 2025-11-03T14:24:12
|
https://github.com/huggingface/datasets/issues/7837
|
ernestum
| 2
|
[] |
7,834
|
Audio.cast_column() or Audio.decode_example() causes Colab kernel crash (std::bad_alloc)
|
### Describe the bug
When using the huggingface datasets.Audio feature to decode a local or remote (public HF dataset) audio file inside Google Colab, the notebook kernel crashes with std::bad_alloc (C++ memory allocation failure).
The crash happens even with a minimal code example and valid .wav file that can be read successfully using soundfile.
Here is a sample Colab notebook to reproduce the problem.
https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing
code sample:
```
...
audio_dataset = audio_dataset.cast_column("audio", Audio(sampling_rate=16000))
# Accessing the first element crashes the Colab kernel
print(audio_dataset[0]["audio"])
```
Error log
```
WARNING what(): std::bad_alloc
terminate called after throwing an instance of 'std::bad_alloc'
```
Environment
Platform: Google Colab (Python 3.12.12)
datasets Version: 4.3.0
soundfile Version: 0.13.1
torchaudio Version: 2.8.0+cu126
Thanks in advance for any help with this error, which I've been getting for approximately two weeks now after it previously worked.
Regards
### Steps to reproduce the bug
https://colab.research.google.com/drive/1nnb-GC5748Tux3xcYRussCGp2x-zM9Id?usp=sharing
### Expected behavior
Loading the audio and decode it.
It should safely return:
```python
{
    "path": "path/filename.wav",
    "array": np.ndarray([...]),
    "sampling_rate": 16000
}
```
### Environment info
Environment
Platform: Google Colab (Python 3.12.12)
datasets Version: 4.3.0
soundfile Version: 0.13.1
torchaudio Version: 2.8.0+cu126
|
OPEN
| 2025-10-27T22:02:00
| 2025-11-15T16:28:04
| null |
https://github.com/huggingface/datasets/issues/7834
|
rachidio
| 8
|
[] |
7,832
|
[DOCS][minor] TIPS paragraph not compiled in docs/stream
|
In the client documentation, the markdown 'TIP' paragraph in docs/stream#shuffle is not rendered correctly, unlike the other tips on the same page, even though the markdown source handles it correctly.
Documentation:
https://huggingface.co/docs/datasets/v4.3.0/en/stream#shuffle:~:text=%5B!TIP%5D%5BIterableDataset.shuffle()%5D(/docs/datasets/v4.3.0/en/package_reference/main_classes%23datasets.IterableDataset.shuffle)%20will%20also%20shuffle%20the%20order%20of%20the%20shards%20if%20the%20dataset%20is%20sharded%20into%20multiple%20files.
Github source:
https://github.com/huggingface/datasets/blob/main/docs/source/stream.mdx#:~:text=Casting%20only%20works%20if%20the%20original%20feature%20type%20and%20new%20feature%20type%20are%20compatible.%20For%20example%2C%20you%20can%20cast%20a%20column%20with%20the%20feature%20type%20Value(%27int32%27)%20to%20Value(%27bool%27)%20if%20the%20original%20column%20only%20contains%20ones%20and%20zeros.
|
CLOSED
| 2025-10-27T10:03:22
| 2025-10-27T10:10:54
| 2025-10-27T10:10:54
|
https://github.com/huggingface/datasets/issues/7832
|
art-test-stack
| 0
|
[] |
7,829
|
Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict
|
### Describe the bug
Hi team, first off, I love the datasets library! 🥰
I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict.
Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows.
Training Task: I'm performing contrastive learning with SentenceTransformer and Accelerate on a single node with 4 H100, which requires me to sample from only one dataset at a time.
Training Loop: At each training step, I sample ~16,000 examples from a single dataset, and then switch to a different dataset for the next step. I iterate through all 362 datasets this way.
Problem: The process's memory usage continuously increases over time, eventually causing a stall in which the GPUs stop working. It seems memory from previously sampled datasets isn't being released. I've set num_workers=0 for all experiments.
Chart 1: Standard DatasetDict. The memory usage grows steadily until it makes the training stall (RSS memory). <img width="773" height="719" alt="Image" src="https://github.com/user-attachments/assets/6606bef5-1153-4f2d-bf08-82da249d6e8d" />
Chart 2: IterableDatasetDict. I also tried to use IterableDatasetDict and IterableDataset. The memory curve is "smoother," but the result is the same: it grows indefinitely and the training stalls. <img width="339" height="705" alt="Image" src="https://github.com/user-attachments/assets/ee90c1a1-6c3b-4135-9edc-90955cb1695a" />
Any feedback or guidance on how to manage this memory would be greatly appreciated!
### Steps to reproduce the bug
WIP: I'll add some code that manages to reproduce this error, but it's not straightforward.
### Expected behavior
The memory usage should remain relatively constant or plateau after a few steps. Memory used for sampling one dataset should be released before or during the sampling of the next dataset.
### Environment info
Python: 3.12
Datasets: 4.3.0
SentenceTransformers: 5.1.1
|
OPEN
| 2025-10-24T09:51:38
| 2025-11-06T13:31:26
| null |
https://github.com/huggingface/datasets/issues/7829
|
raphaelsty
| 4
|
[] |
7,821
|
Building a dataset with large variable size arrays results in error ArrowInvalid: Value X too large to fit in C integer type
|
### Describe the bug
When I use `map` to store raw audio waveforms of variable lengths in a column of a dataset, the `map` call fails with `ArrowInvalid: Value X too large to fit in C integer type`.
```
Traceback (most recent call last):
File "...lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3526, in _map_single
writer.write_batch(batch)
File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 605, in write_batch
arrays.append(pa.array(typed_sequence))
^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/array.pxi", line 252, in pyarrow.lib.array
File "pyarrow/array.pxi", line 114, in pyarrow.lib._handle_arrow_array_protocol
File "...lib/python3.12/site-packages/datasets/arrow_writer.py", line 225, in __arrow_array__
out = list_of_np_array_to_pyarrow_listarray(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/features/features.py", line 1538, in list_of_np_array_to_pyarrow_listarray
return list_of_pa_arrays_to_pyarrow_listarray(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...lib/python3.12/site-packages/datasets/features/features.py", line 1530, in list_of_pa_arrays_to_pyarrow_listarray
offsets = pa.array(offsets, type=pa.int32())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/array.pxi", line 362, in pyarrow.lib.array
File "pyarrow/array.pxi", line 87, in pyarrow.lib._ndarray_to_array
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Value 2148479376 too large to fit in C integer type
```
### Steps to reproduce the bug
Calling map on a dataset that returns a column with long 1d numpy arrays of variable length.
Example:
```python
# %%
import logging
import datasets
import pandas as pd
import numpy as np
# %%
def process_batch(batch, rank):
res = []
for _ in batch["id"]:
res.append(np.zeros((2**30)).astype(np.uint16))
return {"audio": res}
if __name__ == "__main__":
df = pd.DataFrame(
{
"id": list(range(400)),
}
)
ds = datasets.Dataset.from_pandas(df)
try:
from multiprocess import set_start_method
set_start_method("spawn")
except RuntimeError:
print("Spawn method already set, continuing...")
mapped_ds = ds.map(
process_batch,
batched=True,
batch_size=2,
with_rank=True,
num_proc=2,
cache_file_name="path_to_cache/tmp.arrow",
writer_batch_size=200,
remove_columns=ds.column_names,
# disable_nullable=True,
)
```
### Expected behavior
I think the offsets should be pa.int64() if needed and not forced to be `pa.int32()`
in https://github.com/huggingface/datasets/blob/3e13d30823f8ec498d56adbc18c6880a5463b313/src/datasets/features/features.py#L1535
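A sketch of what such a fix could look like (my own, untested): compute the offsets first and fall back to 64-bit offsets with a `LargeListArray` when the total length exceeds the int32 range:
```python
import numpy as np
import pyarrow as pa

def arrays_to_listarray(l_arr):
    # Hypothetical replacement for the pa.int32()-only offsets:
    # promote to int64 offsets (LargeListArray) when the cumulative length overflows int32.
    offsets = np.cumsum([0] + [len(arr) for arr in l_arr])
    values = pa.concat_arrays(l_arr)
    if offsets[-1] > np.iinfo(np.int32).max:
        return pa.LargeListArray.from_arrays(pa.array(offsets, type=pa.int64()), values)
    return pa.ListArray.from_arrays(pa.array(offsets, type=pa.int32()), values)
```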
### Environment info
- `datasets` version: 3.3.1
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.12.9
- `huggingface_hub` version: 0.29.0
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
OPEN
| 2025-10-16T08:45:17
| 2025-10-20T13:42:05
| null |
https://github.com/huggingface/datasets/issues/7821
|
kkoutini
| 1
|
[] |
7,819
|
Cannot download opus dataset
|
When I tried to download opus_books using:
```python
from datasets import load_dataset
dataset = load_dataset("Helsinki-NLP/opus_books")
```
I got the following error:
```
FileNotFoundError: Couldn't find any data file at /workspace/Helsinki-NLP/opus_books. Couldn't find 'Helsinki-NLP/opus_books' on the Hugging Face Hub either: LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
```
I also tried:
```python
dataset = load_dataset("opus_books", "en-zh")
```
and the errors remain the same. However, I can download "mlabonne/FineTome-100k" successfully.
My `datasets` version is 4.2.0.
Any clues? Big thanks.
|
OPEN
| 2025-10-15T09:06:19
| 2025-10-20T13:45:16
| null |
https://github.com/huggingface/datasets/issues/7819
|
liamsun2019
| 1
|
[] |
7,818
|
train_test_split and stratify breaks with Numpy 2.0
|
### Describe the bug
As stated in the title, NumPy changed the behavior of `copy` in versions >2.0, which breaks the stratify parameter.
e.g. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")` returns a Numpy error.
It works if you downgrade Numpy to a version lower than 2.0.
### Steps to reproduce the bug
1. Numpy > 2.0
2. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")`
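For reference, the NumPy 2.0 change I suspect is the culprit (my assumption, not confirmed against the `datasets` source): `np.array(..., copy=False)` now raises instead of silently copying when a copy is unavoidable:
```python
import numpy as np

x = [1, 2, 3]
np.asarray(x)            # works on NumPy 1.x and 2.x
np.array(x, copy=False)  # raises ValueError on NumPy >= 2.0, since a copy is required here
```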
### Expected behavior
It returns a stratified split as per the results of Numpy < 2.0
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.35
- Python version: 3.13.7
- Huggingface_hub version: 0.34.4
- PyArrow version: 19.0.0
- Pandas version: 2.3.2
|
CLOSED
| 2025-10-15T00:01:19
| 2025-10-28T16:10:44
| 2025-10-28T16:10:44
|
https://github.com/huggingface/datasets/issues/7818
|
davebulaval
| 3
|
[] |
7,816
|
disable_progress_bar() not working as expected
|
### Describe the bug
Hi,
I'm trying to load a dataset on Kaggle TPU image. There is some known compat issue with progress bar on Kaggle, so I'm trying to disable the progress bar globally. This does not work as you can see in [here](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
In contrast, disabling the progress bar for snapshot_download() works as expected, as shown [here](https://www.kaggle.com/code/windmaple/snapshot-download-error).
### Steps to reproduce the bug
See this [notebook](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
There is something wrong with `shell_paraent`.
### Expected behavior
The downloader should disable progress bar and move forward w/ no error.
### Environment info
The latest versions, installed via:
```
!pip install -U datasets ipywidgets ipykernel
```
|
CLOSED
| 2025-10-14T03:25:39
| 2025-10-14T23:49:26
| 2025-10-14T23:49:26
|
https://github.com/huggingface/datasets/issues/7816
|
windmaple
| 2
|
[] |
7,813
|
Caching does not work when using python3.14
|
### Describe the bug
```
Traceback (most recent call last):
File "/workspace/ctn.py", line 8, in <module>
ds = load_dataset(f"naver-clova-ix/synthdog-{lang}") # or "synthdog-zh" for Chinese
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
path=path,
...<10 lines>...
**config_kwargs,
)
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1185, in load_dataset_builder
builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 612, in _use_legacy_cache_dir_if_possible
self._check_legacy_cache2(dataset_module) or self._check_legacy_cache() or None
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/builder.py", line 485, in _check_legacy_cache2
config_id = self.config.name + "-" + Hasher.hash({"data_files": self.config.data_files})
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 188, in hash
return cls.hash_bytes(dumps(value))
~~~~~^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 120, in dumps
dump(obj, file)
~~~~^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 114, in dump
Pickler(file, recurse=True).dump(obj)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 428, in dump
StockPickler.dump(self, obj)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 498, in dump
self.save(obj)
~~~~~~~~~^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/datasets/utils/_dill.py", line 70, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 422, in save
StockPickler.save(self, obj, save_persistent_id)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 572, in save
f(self, obj) # Call unbound method with explicit self
~^^^^^^^^^^^
File "/workspace/.venv/lib/python3.14/site-packages/dill/_dill.py", line 1262, in save_module_dict
StockPickler.save_dict(pickler, obj)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "/usr/lib/python3.14/pickle.py", line 1064, in save_dict
self._batch_setitems(obj.items(), obj)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Steps to reproduce the bug
```python
ds_train = ds["train"].map(lambda x: {**x, "lang": lang})
```
### Expected behavior
The bug should be fixed so that caching works as it did on earlier Python versions.
### Environment info
- `datasets` version: 4.2.0
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.39
- Python version: 3.14.0
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
|
CLOSED
| 2025-10-10T15:36:46
| 2025-10-27T17:08:26
| 2025-10-27T17:08:26
|
https://github.com/huggingface/datasets/issues/7813
|
intexcor
| 2
|
[] |
7,811
|
SIGSEGV when Python exits due to near null deref
|
### Describe the bug
When I run the following python script using datasets I get a segfault.
```python
from datasets import load_dataset
from tqdm import tqdm
progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
progress_bar.update(1)
```
```
% lldb -- python3 crashmin.py
(lldb) target create "python3"
Current executable set to '/Users/ian/bug/venv/bin/python3' (arm64).
(lldb) settings set -- target.run-args "crashmin.py"
(lldb) r
Process 8095 launched: '/Users/ian/bug/venv/bin/python3' (arm64)
Process 8095 stopped
* thread #2, stop reason = exec
frame #0: 0x0000000100014b30 dyld`_dyld_start
dyld`_dyld_start:
-> 0x100014b30 <+0>: mov x0, sp
0x100014b34 <+4>: and sp, x0, #0xfffffffffffffff0
0x100014b38 <+8>: mov x29, #0x0 ; =0
Target 0: (Python) stopped.
(lldb) c
Process 8095 resuming
cols : 0% 0/1000 [00:00<?, ?cols/s]Process 8095 stopped
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
_datetime.cpython-313-darwin.so`delta_new:
-> 0x101783454 <+188>: ldr x3, [x20, #0x10]
0x101783458 <+192>: adrp x0, 10
0x10178345c <+196>: add x0, x0, #0x6fc ; "seconds"
Target 0: (Python) stopped.
(lldb) bt
* thread #2, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10)
* frame #0: 0x0000000101783454 _datetime.cpython-313-darwin.so`delta_new + 188
frame #1: 0x0000000100704b60 Python`type_call + 96
frame #2: 0x000000010067ba34 Python`_PyObject_MakeTpCall + 120
frame #3: 0x00000001007aae3c Python`_PyEval_EvalFrameDefault + 30236
frame #4: 0x000000010067c900 Python`PyObject_CallOneArg + 112
frame #5: 0x000000010070f0a0 Python`slot_tp_finalize + 116
frame #6: 0x000000010070c3b4 Python`subtype_dealloc + 788
frame #7: 0x00000001006c378c Python`insertdict + 756
frame #8: 0x00000001006db2b0 Python`_PyModule_ClearDict + 660
frame #9: 0x000000010080a9a8 Python`finalize_modules + 1772
frame #10: 0x0000000100809a44 Python`_Py_Finalize + 264
frame #11: 0x0000000100837630 Python`Py_RunMain + 252
frame #12: 0x0000000100837ef8 Python`pymain_main + 304
frame #13: 0x0000000100837f98 Python`Py_BytesMain + 40
frame #14: 0x000000019cfcc274 dyld`start + 2840
(lldb) register read x20
x20 = 0x0000000000000000
(lldb)
```
### Steps to reproduce the bug
Run the script above, and observe the segfault.
### Expected behavior
No segfault
### Environment info
```
% pip freeze datasets | grep -i datasets
datasets==4.2.0
(venv) 0 ~/bug 14:58:06
% pip freeze tqdm | grep -i tqdm
tqdm==4.67.1
(venv) 0 ~/bug 14:58:16
% python --version
Python 3.13.7
```
|
OPEN
| 2025-10-09T22:00:11
| 2025-10-10T22:09:24
| null |
https://github.com/huggingface/datasets/issues/7811
|
iankronquist
| 4
|
[] |
7,804
|
Support scientific data formats
|
List of formats and libraries we can use to load the data in `datasets`:
- [ ] DICOMs: pydicom
- [x] NIfTIs: nibabel
- [ ] WFDB: wfdb
cc @zaRizk7 for viz
Feel free to comment / suggest other formats and libs you'd like to see or to share your interest in one of the mentioned format
|
OPEN
| 2025-10-09T10:18:24
| 2025-11-26T16:09:43
| null |
https://github.com/huggingface/datasets/issues/7804
|
lhoestq
| 18
|
[] |
7,802
|
[Docs] Missing documentation for `Dataset.from_dict`
|
Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the official documentation for the `Dataset` class on HuggingFace.
The method in question:
```python
@classmethod
def from_dict(
cls,
mapping: dict,
features: Optional[Features] = None,
info: Optional[DatasetInfo] = None,
split: Optional[NamedSplit] = None,
) -> "Dataset":
"""
Convert `dict` to a `pyarrow.Table` to create a [`Dataset`].
Important: a dataset created with from_dict() lives in memory
and therefore doesn't have an associated cache directory.
This may change in the future, but in the meantime if you
want to reduce memory usage you should write it back on disk
and reload using e.g. save_to_disk / load_from_disk.
Args:
mapping (`Mapping`):
Mapping of strings to Arrays or Python lists.
features ([`Features`], *optional*):
Dataset features.
info (`DatasetInfo`, *optional*):
Dataset information, like description, citation, etc.
split (`NamedSplit`, *optional*):
Name of the dataset split.
Returns:
[`Dataset`]
"""
```
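For reference, a minimal usage example of the method (matching the docstring above):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
print(ds[0])  # {'text': 'hello', 'label': 0}
```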
|
OPEN
| 2025-10-09T02:54:41
| 2025-10-19T16:09:33
| null |
https://github.com/huggingface/datasets/issues/7802
|
aaronshenhao
| 2
|
[] |
7,798
|
Audio dataset is not decoding on 4.1.1
|
### Describe the bug
The audio column remain as non-decoded objects even when accessing them.
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
Works fine with `datasets==3.6.0`
Followed the docs in
- https://huggingface.co/docs/datasets/en/audio_load
### Steps to reproduce the bug
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
### Expected behavior
It should decode when accessing the element
### Environment info
4.1.1
ubuntu 22.04
Related
- https://github.com/huggingface/datasets/issues/7707
|
OPEN
| 2025-10-05T06:37:50
| 2025-10-06T14:07:55
| null |
https://github.com/huggingface/datasets/issues/7798
|
thewh1teagle
| 3
|
[] |
7,793
|
Cannot load dataset, fails with nested data conversions not implemented for chunked array outputs
|
### Describe the bug
Hi! When I load this dataset, it fails with a pyarrow error. I'm using datasets 4.1.1, though I also see this with datasets 4.1.2
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
Error:
```
Traceback (most recent call last):
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
for _, table in generator:
^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/packaged_modules/parquet/parquet.py", line 93, in _generate_tables
for batch_idx, record_batch in enumerate(
~~~~~~~~~^
parquet_fragment.to_batches(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
):
^
File "pyarrow/_dataset.pyx", line 3904, in _iterator
File "pyarrow/_dataset.pyx", line 3494, in pyarrow._dataset.TaggedRecordBatchIterator.__next__
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/neev/scratch/test_hf.py", line 3, in <module>
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/load.py", line 1412, in load_dataset
builder_instance.download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
download_config=download_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
storage_options=storage_options,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 894, in download_and_prepare
self._download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
dl_manager=dl_manager,
^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
**download_and_prepare_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 970, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1702, in _prepare_split
for job_id, done, content in self._prepare_split_single(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
):
^
File "/Users/neev/scratch/.venv/lib/python3.13/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
### Expected behavior
The dataset loads
### Environment info
Datasets: 4.1.1
Python: 3.13
Platform: Macos
|
OPEN
| 2025-09-27T01:03:12
| 2025-09-27T21:35:31
| null |
https://github.com/huggingface/datasets/issues/7793
|
neevparikh
| 1
|
[] |
7,792
|
Concatenate IterableDataset instances and distribute underlying shards in a RoundRobin manner
|
### Feature request
I would like to be able to concatenate multiple `IterableDataset` with possibly different features. I would like to then be able to stream the results in parallel (both using DDP and multiple workers in the pytorch DataLoader). I want the merge of datasets to be well balanced between the different processes.
### Motivation
I want to train a model on a combination of datasets, which I can convert to a single representation. This applies to converting different datasets items to the same Python class, as using a tokenizer on multiple modalities.
Assuming that my original datasets are not necessarily well balanced, as they may have different sizes and thus different numbers of shards, I would like the merged dataset to be distributed evenly over the multiple processes. I don't mind if it's not perfectly balanced and, as a result, some workers of the torch DataLoader do nothing, as long as DDP is properly handled and causes no deadlock.
### What I've tried
I've tried the two functions already provided in datasets, namely `interleave_datasets` and `concatenate_datasets`.
- Interleave seems to be the best approach for what I'm trying to do. However, it doesn't suit my purpose because, as I understand it, it stops as soon as one of the dataset sources is exhausted, or repeats the smallest source's items until the largest is exhausted. I would like something in-between, similar to what [roundrobin does](https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.roundrobin).
- Concatenate does not mix the data enough and one dataset may be overrepresented in some early batches.
Let's consider 3 datasets composed of different numbers of shards, as follows: [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]], where s denotes the underlying shard, the first index the dataset and the second the shard number.
If we request 3 shards in the `shard_data_source` we should obtain the following:
index 0 gets s0_0 s2_0
index 1 gets s0_1 s2_1
index 2 gets s1_0 s2_3
I started implementing the following, but I'm afraid my sharding logic is incorrect.
```python
from copy import deepcopy
from itertools import chain, islice
import datasets
import numpy as np
from datasets import IterableDataset
from datasets.iterable_dataset import _BaseExamplesIterable
from more_itertools import roundrobin
class MixMultiSourcesExampleIterable(_BaseExamplesIterable):
def __init__(self, ex_iterables: list[_BaseExamplesIterable]):
super().__init__()
self.ex_iterables = ex_iterables
def _init_state_dict(self) -> dict:
self._state_dict = {
"ex_iterables": [ex_iterable._init_state_dict() for ex_iterable in self.ex_iterables],
"type": self.__class__.__name__,
}
return self._state_dict
@property
def num_shards(self) -> int:
return sum(ex_iterable.num_shards for ex_iterable in self.ex_iterables)
def __iter__(self):
yield from roundrobin(*self.ex_iterables)
def shuffle_data_sources(self, generator: np.random.Generator) -> "MixMultiSourcesExampleIterable":
"""Shuffle the list of examples iterable, as well as each underlying examples iterable."""
rng = deepcopy(generator)
ex_iterables = list(self.ex_iterables)
rng.shuffle(ex_iterables)
ex_iterables = [ex_iterable.shuffle_data_sources(generator) for ex_iterable in ex_iterables]
return MixMultiSourcesExampleIterable(ex_iterables)
def shard_data_sources(self, num_shards: int, index: int, contiguous=True) -> "MixMultiSourcesExampleIterable":
"""Shard the underlying iterables in a roundrobin manner.
Let's consider we have our iterables as [[s0_0, s0_1], [s1_0], [s2_0, s2_1, s2_3]],
and we request 3 shards.
index 0 gets s0_0 s2_0
index 1 gets s0_1 s2_1
index 2 gets s1_0 s2_3
"""
return MixMultiSourcesExampleIterable(
list(
islice(
# flatten all underlying iterables
chain.from_iterable([ex_iterable.shard_data_sources(1, 0) for ex_iterable in self.ex_iterables]),
# offset the starting point by the index
index,
# take over the full list, so exhaust the iterators
None,
# step by the number of shards requested
num_shards,
)
)
)
def mix_dataset(iterable_datasets: list[datasets.IterableDataset]) -> IterableDataset:
ex_iterable = MixMultiSourcesExampleIterable([ds._ex_iterable for ds in iterable_datasets])
return IterableDataset(
ex_iterable, distributed=iterable_datasets[0]._distributed, formatting=iterable_datasets[0]._formatting
)
```
### Questions
- Am I missing something? Is there a way to use `interleave_datasets` or `concatenate_datasets` to fit my purpose?
- Would it be the right approach to spread the maximum number of underlying shards across my different processes?
### Your contribution
As much as I can.
|
CLOSED
| 2025-09-26T10:05:19
| 2025-10-15T18:05:23
| 2025-10-15T18:05:23
|
https://github.com/huggingface/datasets/issues/7792
|
LTMeyer
| 17
|
[
"enhancement"
] |
7,788
|
`Dataset.to_sql` doesn't utilize `num_proc`
|
The underlying `SqlDatasetWriter` has `num_proc` as an available argument [here](https://github.com/huggingface/datasets/blob/5dc1a179783dff868b0547c8486268cfaea1ea1f/src/datasets/io/sql.py#L63) , but `Dataset.to_sql()` does not accept it, therefore it is always using one process for the SQL conversion.
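Until this is fixed, a workaround sketch (my own, based on the linked source; the writer's exact signature may differ across versions) is to call the writer directly:
```python
from datasets.io.sql import SqlDatasetWriter

# Hypothetical direct use of the writer, which accepts num_proc, instead of Dataset.to_sql()
writer = SqlDatasetWriter(dataset, "my_table", "sqlite:///data.db", num_proc=4)
writer.write()
```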
|
OPEN
| 2025-09-24T20:34:47
| 2025-09-24T20:35:01
| null |
https://github.com/huggingface/datasets/issues/7788
|
tcsmaster
| 0
|
[] |
7,780
|
BIGPATENT dataset inaccessible (deprecated script loader)
|
dataset: https://huggingface.co/datasets/NortheasternUniversity/big_patent
When I try to load it with the datasets library, it fails with:
RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
Could you please publish a Parquet/Arrow export of BIGPATENT on the Hugging Face Hub so that it can be accessed with datasets>=4.x?
|
CLOSED
| 2025-09-18T08:25:34
| 2025-09-25T14:36:13
| 2025-09-25T14:36:13
|
https://github.com/huggingface/datasets/issues/7780
|
ishmaifan
| 2
|
[] |
7,777
|
push_to_hub not overwriting but stuck in a loop when there are existing commits
|
### Describe the bug
`get_deletions_and_dataset_card` gets stuck on an "a commit has happened since" error (HTTP error 412) when pushing to the Hub with tag 4.1.0. The error does not exist in 4.0.0.
### Steps to reproduce the bug
Create code that uses push_to_hub and run it twice, each time with different content for the datasets.Dataset.
The code gets stuck in the time.sleep loop inside `get_deletions_and_dataset_card`. If the error is printed explicitly, it is HTTP 412.
### Expected behavior
The new dataset should overwrite the existing one on the repo.
### Environment info
datasets 4.1.0
|
CLOSED
| 2025-09-17T03:15:35
| 2025-09-17T19:31:14
| 2025-09-17T19:31:14
|
https://github.com/huggingface/datasets/issues/7777
|
Darejkal
| 4
|
[] |
7,772
|
Error processing scalar columns using tensorflow.
|
`datasets==4.0.0`
```
columns_to_return = ['input_ids','attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)
```
`train_ds`:
```
train_ds type: <class 'datasets.arrow_dataset.Dataset'>, shape: (1000, 9)
columns: ['question', 'sentences', 'answer', 'str_idx', 'end_idx', 'input_ids', 'attention_mask', 'start_positions', 'end_positions']
features:{'question': Value('string'), 'sentences': Value('string'), 'answer': Value('string'), 'str_idx': Value('int64'), 'end_idx': Value('int64'), 'input_ids': List(Value('int32')), 'attention_mask': List(Value('int8')), 'start_positions': Value('int64'), 'end_positions': Value('int64')}
```
`train_ds_tensor = train_ds['start_positions'].to_tensor(shape=(-1,1))` hits the following error:
```
AttributeError: 'Column' object has no attribute 'to_tensor'
```
`tf.reshape(train_ds['start_positions'], shape=[-1,1])` hits the following error:
```
TypeError: Scalar tensor has no `len()`
```
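A workaround sketch (my own, untested in this exact setup): materialize the `Column` wrapper into a tensor before reshaping:
```python
import tensorflow as tf

# Hypothetical workaround: stack the Column's scalar items into a real tensor first
start_positions = tf.stack(list(train_ds["start_positions"]))
start_positions = tf.reshape(start_positions, shape=[-1, 1])
```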
|
OPEN
| 2025-09-15T10:36:31
| 2025-09-27T08:22:44
| null |
https://github.com/huggingface/datasets/issues/7772
|
khteh
| 2
|
[] |
7,767
|
Custom `dl_manager` in `load_dataset`
|
### Feature request
https://github.com/huggingface/datasets/blob/4.0.0/src/datasets/load.py#L1411-L1418
```
def load_dataset(
...
dl_manager: Optional[DownloadManager] = None, # add this new argument
**config_kwargs,
) -> Union[DatasetDict, Dataset, IterableDatasetDict, IterableDataset]:
...
# Create a dataset builder
builder_instance = load_dataset_builder(
path=path,
name=name,
data_dir=data_dir,
data_files=data_files,
cache_dir=cache_dir,
features=features,
download_config=download_config,
download_mode=download_mode,
revision=revision,
token=token,
storage_options=storage_options,
**config_kwargs,
)
# Return iterable dataset in case of streaming
if streaming:
return builder_instance.as_streaming_dataset(split=split)
# Note: This is the revised part
if dl_manager is None:
if download_config is None:
download_config = DownloadConfig(
cache_dir=builder_instance._cache_downloaded_dir,
force_download=download_mode == DownloadMode.FORCE_REDOWNLOAD,
force_extract=download_mode == DownloadMode.FORCE_REDOWNLOAD,
use_etag=False,
num_proc=num_proc,
token=builder_instance.token,
storage_options=builder_instance.storage_options,
) # We don't use etag for data files to speed up the process
dl_manager = DownloadManager(
dataset_name=builder_instance.dataset_name,
download_config=download_config,
data_dir=builder_instance.config.data_dir,
record_checksums=(
builder_instance._record_infos or verification_mode == VerificationMode.ALL_CHECKS
),
)
# Download and prepare data
builder_instance.download_and_prepare(
download_config=download_config,
download_mode=download_mode,
verification_mode=verification_mode,
dl_manager=dl_manager, # pass the new argument
num_proc=num_proc,
storage_options=storage_options,
)
...
```
### Motivation
In my case, I'm hoping to handle the cache file downloads manually (not using hash filenames and saving to another location, or reusing potentially existing local files).
### Your contribution
It's already implemented above. If maintainers think this should be considered, I'll open a PR.
|
OPEN
| 2025-09-12T19:06:23
| 2025-09-12T19:07:52
| null |
https://github.com/huggingface/datasets/issues/7767
|
ain-soph
| 0
|
[
"enhancement"
] |
7,766
|
cast columns to Image/Audio/Video with `storage_options`
|
### Feature request
Allow `storage_options` to be passed in
1. `cast` related operations (e.g., `cast_columns, cast`)
2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`
```python3
import datasets
image_path = "s3://bucket/sample.png"
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
# dataset = dataset.cast_column("image_path", datasets.Image()) # now works without `storage_options`
# expected behavior
dataset = dataset.cast_column("image_path", datasets.Image(), storage_options={"anon": True})
```
### Motivation
I'm using my own registered fsspec filesystem (s3 with customized local cache support). I need to pass cache folder paths `cache_dirs: list[str]` to the filesystem when I read the remote images (cast from file_paths).
### Your contribution
I could help with a PR on weekends.
|
OPEN
| 2025-09-12T18:51:01
| 2025-09-27T08:14:47
| null |
https://github.com/huggingface/datasets/issues/7766
|
ain-soph
| 5
|
[
"enhancement"
] |
7,765
|
polars dataset cannot cast column to Image/Audio/Video
|
### Describe the bug
`from_polars` dataset cannot cast column to Image/Audio/Video, while it works on `from_pandas` and `from_dict`
### Steps to reproduce the bug
```python3
import datasets
import pandas as pd
import polars as pl
image_path = "./sample.png"
# polars
df = pl.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_polars(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # raises Error
# pyarrow.lib.ArrowNotImplementedError: Unsupported cast from large_string to struct using function cast_struct
# pandas
df = pd.DataFrame({"image_path": [image_path]})
dataset = datasets.Dataset.from_pandas(df)
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
{'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
# dict
dataset = datasets.Dataset.from_dict({"image_path": [image_path]})
dataset = dataset.cast_column("image_path", datasets.Image())
# # pass
{'image_path': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=338x277 at 0x7FBA719D4050>}
```
### Expected behavior
The `from_polars` case shouldn't raise an error and should produce the same outputs as `from_pandas` and `from_dict`.
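As a possible interim workaround (my suggestion, untested across versions): since polars-backed tables use Arrow `large_string`, normalizing the column to a plain string type first may avoid the failing cast:
```python
# Hypothetical workaround: downcast large_string to string before casting to Image
dataset = datasets.Dataset.from_polars(df)
dataset = dataset.cast_column("image_path", datasets.Value("string"))
dataset = dataset.cast_column("image_path", datasets.Image())
```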
### Environment info
```
# Name Version Build Channel
datasets 4.0.0 pypi_0 pypi
pandas 2.3.1 pypi_0 pypi
polars 1.32.3 pypi_0 pypi
```
|
CLOSED
| 2025-09-12T18:32:49
| 2025-10-13T14:39:48
| 2025-10-13T14:39:48
|
https://github.com/huggingface/datasets/issues/7765
|
ain-soph
| 2
|
[] |
7,760
|
Hugging Face Hub Dataset Upload CAS Error
|
### Describe the bug
Experiencing persistent 401 Unauthorized errors when attempting to upload datasets to Hugging Face Hub using the `datasets` library. The error occurs specifically with the CAS (Content Addressable Storage) service during the upload process. Tried using HF_HUB_DISABLE_XET=1. It seems to work for smaller files.
Exact error message :
```
Processing Files (0 / 0) : | | 0.00B / 0.00B 2025-09-10T09:44:35.657565Z ERROR Fatal Error: "cas::upload_xorb" api call failed (request id 01b[...]XXX): HTTP status client error (401 Unauthorized) for url (https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX)
at /home/runner/work/xet-core/xet-core/cas_client/src/retry_wrapper.rs:113
Processing Files (0 / 0) : 0%| | 0.00B / 184kB, 0.00B/s
New Data Upload : 0%| | 0.00B / 184kB, 0.00B/s
❌ Failed to push some_dataset: Data processing error: CAS service error : Reqwest Error: HTTP status client error (401 Unauthorized), domain: https://cas-server.xethub.hf.co/xorb/default/7f3abdc[...]XXX
```
Workaround Attempts
1. **Disabled XET**: Set `HF_HUB_DISABLE_XET=1` environment variable
2. **Updated hf-xet**: Use `hf-xet==1.1.9` rather than latest
3. **Verified Authentication**: Confirmed HF token is valid and has write permissions
4. **Tested with Smaller Datasets**:
- 100 samples: ✅ **SUCCESS** (uploaded successfully)
- 10,000 samples: ❌ **FAILS** (401 Unauthorized)
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
# Create dataset (example with 10,000 samples)
dataset = Dataset.from_dict({
"question": questions,
"answer": answers,
# ... other fields
})
# Split into train/test
dataset_dict = dataset.train_test_split(test_size=0.1)
# Upload to Hub
dataset_dict.push_to_hub("Org/some-dataset")
```
### Expected behavior
## Expected Behavior
- Dataset should upload successfully to Hugging Face Hub
- Progress bars should complete without authentication errors
- Dataset should be accessible at the specified repository URL
## Actual Behavior
- Upload fails consistently with 401 Unauthorized error
- Error occurs specifically during CAS service interaction
- No progress is made on the upload (0% completion)
- Dataset is created on Hugging Face Hub with no data folder
### Environment info
- **Platform**: SageMaker (AWS)
- **Python Version**: 3.12
- **Libraries**:
- `datasets` library (latest version)
- `hf-xet==1.1.9` (attempted fix)
- **Authentication**: Hugging Face token configured
- **Dataset Size**: ~10,000 samples, works for smaller sizes (e.g. 100)
|
OPEN
| 2025-09-10T10:01:19
| 2025-09-16T20:01:36
| null |
https://github.com/huggingface/datasets/issues/7760
|
n-bkoe
| 4
|
[] |
7,759
|
Comment/feature request: Huggingface 502s from GHA
|
This is no longer a pressing issue, but for completeness I am reporting that in August 26th, GET requests to `https://datasets-server.huggingface.co/info\?dataset\=livebench/math` were returning 502s when invoked from [github actions](https://github.com/UKGovernmentBEIS/inspect_evals/actions/runs/17241892475/job/48921123754) (that link will expire eventually, [here are the logs](https://github.com/user-attachments/files/22233578/logs_44225296943.zip)).
When invoked from actions, it appeared to be consistently failing for ~6 hours. However, these 502s never occurred when the request was invoked from my local machine in that same time period.
I suspect that this is related to how the requests are routed with github actions versus locally.
It's not clear to me whether the request even reached Hugging Face servers or whether it was the GitHub proxy that stopped it from going through, but I wanted to report it nonetheless in case this is helpful information. I'm curious if Hugging Face can do anything on their end to confirm the cause.
And a feature request in case this happens in the future (assuming Hugging Face has visibility on it): a "datasets status" page highlighting whether 502s occur for specific individual datasets could be useful for people debugging on the other end of this!
|
OPEN
| 2025-09-09T11:59:20
| 2025-09-09T13:02:28
| null |
https://github.com/huggingface/datasets/issues/7759
|
Scott-Simmons
| 0
|
[] |
7,758
|
Option for Anonymous Dataset link
|
### Feature request
Allow for anonymized viewing of datasets. For instance, something similar to [Anonymous GitHub](https://anonymous.4open.science/).
### Motivation
We generally publish our data through Hugging Face. This has worked out very well as it's both our repository and archive (thanks to the DOI feature!). However, we have an increasing challenge when it comes to sharing our datasets for paper (both conference and journal) submissions. Due to the need to share data anonymously, we can't use the Hugging Face URLs, but datasets tend to be too large for inclusion as a zip. Being able to have an anonymous link would be great since we can't be double-publishing the data.
### Your contribution
Sorry, I don't have a contribution to make to the implementation of this. Perhaps it would be possible to work off the [Anonymous GitHub](https://github.com/tdurieux/anonymous_github) code to generate something analogous with pointers to the data still on Hugging Face's servers (instead of the duplication of data required for the GitHub version)?
|
OPEN
| 2025-09-08T20:20:10
| 2025-09-08T20:20:10
| null |
https://github.com/huggingface/datasets/issues/7758
|
egrace479
| 0
|
[
"enhancement"
] |
7,757
|
Add support for `.conll` file format in datasets
|
### Feature request
I’d like to request native support in the Hugging Face datasets library for reading .conll files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.
Right now `.conll` datasets need to be manually parsed or preprocessed before being loaded into datasets. Having built in support would save time and make workflows smoother for researchers and practitioners.
I propose -
Add a conll dataset builder or file parser to datasets that can:
- Read `.conll` files with customizable delimiters (space, tab).
- Handle sentence/document boundaries (typically indicated by empty lines).
- Support common CoNLL variants (e.g., CoNLL-2000 chunking, CoNLL-2003 NER).
- Output a dataset where each example contains:
- tokens: list of strings
- tags (or similar): list of labels aligned with tokens
Given a .conll snippet like:
```
EU NNP B-ORG
rejects VBZ O
German JJ B-MISC
call NN O
. . O
```
The dataset should load as:
```
{
"tokens": ["EU", "rejects", "German", "call", "."],
"tags": ["B-ORG", "O", "B-MISC", "O", "O"]
}
```
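As a starting point, a minimal parsing sketch of the proposed logic (an illustration only, not the actual builder): blank lines delimit sentences and the last column is taken as the tag:
```python
def read_conll(path, delimiter=None):
    # Minimal sketch: parse a .conll file into {"tokens": [...], "tags": [...]} examples.
    # delimiter=None splits on any whitespace, covering space- and tab-separated files.
    examples, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # an empty line marks a sentence boundary
                if tokens:
                    examples.append({"tokens": tokens, "tags": tags})
                    tokens, tags = [], []
                continue
            parts = line.split(delimiter)
            tokens.append(parts[0])
            tags.append(parts[-1])
    if tokens:  # flush the last sentence if the file doesn't end with a blank line
        examples.append({"tokens": tokens, "tags": tags})
    return examples
```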
### Motivation
- CoNLL files are a standard benchmark format in NLP (e.g., CoNLL-2003, CoNLL-2000).
- Many users train NER or sequence labeling models (like BERT for token classification) directly on `.conll`
- Right now you have to write your own parsing scripts. Built-in support would unify this process and be much more convenient
### Your contribution
I’d be happy to contribute by implementing this feature. My plan is to-
- Add a new dataset script (conll.py) to handle .conll files.
- Implement parsing logic that supports sentence/document boundaries and token-label alignment.
- Write unit tests with small `.conll` examples to ensure correctness.
- Add documentation and usage examples so new users can easily load `.conll` datasets.
This would be my first open source contribution, so I’ll follow the `CONTRIBUTING.md` guidelines closely and adjust based on feedback from the maintainers.
|
OPEN
| 2025-09-06T07:25:39
| 2025-09-10T14:22:48
| null |
https://github.com/huggingface/datasets/issues/7757
|
namesarnav
| 1
|
[
"enhancement"
] |
7,756
|
datasets.map(f, num_proc=N) hangs with N>1 when run on import
|
### Describe the bug
If you `import` a module that runs `datasets.map(f, num_proc=N)` at the top-level, Python hangs.
### Steps to reproduce the bug
1. Create a file that runs datasets.map at the top-level:
```bash
cat <<EOF > import_me.py
import datasets
the_dataset = datasets.load_dataset("openai/openai_humaneval")
the_dataset = the_dataset.map(lambda item: item, num_proc=2)
EOF
```
2. Start Python REPL:
```bash
uv run --python 3.12.3 --with "datasets==4.0.0" python3
Python 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```
3. Import the file:
```python
import import_me
```
Observe hang.
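A workaround sketch (my own suggestion, not from the report): defer the `map` call so that multiprocessing doesn't start during module import:
```python
# import_me.py -- hypothetical restructuring that avoids spawning workers at import time
import datasets

def build_dataset():
    ds = datasets.load_dataset("openai/openai_humaneval")
    return ds.map(lambda item: item, num_proc=2)

if __name__ == "__main__":
    the_dataset = build_dataset()
```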
### Expected behavior
Ideally would not hang, or would fallback to num_proc=1 with a warning.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
|
OPEN
| 2025-09-05T10:32:01
| 2025-09-05T10:32:01
| null |
https://github.com/huggingface/datasets/issues/7756
|
arjunguha
| 0
|
[] |
7,753
|
datasets massively slows data reads, even in memory
|
### Describe the bug
Loading image data in a huggingface dataset results in very slow read speeds, approximately 1000 times longer than reading the same data from a pytorch dataset. This applies even when the dataset is loaded into RAM using a `keep_in_memory=True` flag.
The following script reproduces the result with random data, but it applies equally to datasets that are loaded from the hub.
### Steps to reproduce the bug
The following script should reproduce the behavior
```
import torch
import time
from datasets import Dataset
images = torch.randint(0, 255, (1000, 3, 224, 224), dtype=torch.uint8)
labels = torch.randint(0, 200, (1000,), dtype=torch.uint8)
pt_dataset = torch.utils.data.TensorDataset(images, labels)
hf_dataset = Dataset.from_dict({'image': images, 'label':labels})
hf_dataset.set_format('torch', dtype=torch.uint8)
hf_in_memory = hf_dataset.map(lambda x: x, keep_in_memory=True)
# measure access speeds
def time_access(dataset, img_col):
start_time = time.time()
for i in range(1000):
_ = dataset[i][img_col].shape
end_time = time.time()
return end_time - start_time
print(f"In-memory Tensor access: {time_access(pt_dataset, 0):.4f} seconds")
print(f"HF Dataset access: {time_access(hf_dataset, 'image'):.4f} seconds")
print(f"In-memory HF Dataset access: {time_access(hf_in_memory, 'image'):.4f} seconds")
```
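For what it's worth, a note of my own (not from the report): part of the gap is per-item Arrow-to-tensor conversion, which batched access amortizes:
```python
# Hypothetical comparison: one batched read instead of 1000 item-by-item reads
batch = hf_dataset[:1000]["image"]  # a single slice, converted to tensors once
```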
### Expected behavior
For me, the above script produces
```
In-memory Tensor access: 0.0025 seconds
HF Dataset access: 2.9317 seconds
In-memory HF Dataset access: 2.8082 seconds
```
I think that this difference is larger than expected.
### Environment info
- `datasets` version: 4.0.0
- Platform: macOS-14.7.7-arm64-arm-64bit
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
|
OPEN
| 2025-09-04T01:45:24
| 2025-09-18T22:08:51
| null |
https://github.com/huggingface/datasets/issues/7753
|
lrast
| 2
|
[] |
7,751
|
Dill version update
|
### Describe the bug
Why is `datasets` not updating `dill`? I just want to know what the repercussions would be if I updated the `dill` version. For now, in multiple places I have to update libraries because other processes require `dill` 0.4.0, so why not `datasets`?
I'm adding a PR too.
### Steps to reproduce the bug
.
### Expected behavior
.
### Environment info
.
|
OPEN
| 2025-08-27T07:38:30
| 2025-09-10T14:24:02
| null |
https://github.com/huggingface/datasets/issues/7751
|
Navanit-git
| 2
|
[] |
7,746
|
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
|
Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.
The original maintainer's GitHub and Hugging Face repositories appear to be inactive, so a community-led fix is needed.
I have created a working fix by converting the dataset to the modern Parquet format, which does not require a loading script. The fixed version is available here and loads correctly:
**[Awesome075/multi_news_parquet](https://huggingface.co/datasets/Awesome075/multi_news_parquet)**
Could the maintainers please guide me or themselves update the official `multi_news` dataset to use this working Parquet version? This would involve updating the canonical pointer for "multi_news" to resolve to the new repository.
This action would fix the dataset for all users and ensure its continued availability.
Thank you!
|
OPEN
| 2025-08-22T12:52:03
| 2025-08-27T20:23:35
| null |
https://github.com/huggingface/datasets/issues/7746
|
Awesome075
| 1
|
[] |
7,745
|
Audio mono argument no longer supported, despite class documentation
|
### Describe the bug
Either update the documentation, or re-introduce the flag (and corresponding logic to convert the audio to mono)
### Steps to reproduce the bug
```python
Audio(sampling_rate=16000, mono=True)
```
raises the error
```
TypeError: Audio.__init__() got an unexpected keyword argument 'mono'
```
However, the class documentation says:
```
Args:
    sampling_rate (`int`, *optional*):
        Target sampling rate. If `None`, the native sampling rate is used.
    mono (`bool`, defaults to `True`):
        Whether to convert the audio signal to mono by averaging samples across
        channels.
    [...]
```
### Expected behavior
The above call should either work, or the documentation within the Audio class should be updated
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.2
- `fsspec` version: 2025.3.0
|
OPEN
| 2025-08-22T12:15:41
| 2025-08-24T18:22:41
| null |
https://github.com/huggingface/datasets/issues/7745
|
jheitz
| 1
|
[] |
7,744
|
dtype: ClassLabel is not parsed correctly in `features.py`
|
`dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This yaml in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though I changed `ClassLabel` to `string`, using a different dtype in order to avoid the error):
```yaml
license: mit
pretty_name: BrentLab Yeast Genome Resources
size_categories:
- 1K<n<10K
language:
- en
dataset_info:
features:
- name: start
dtype: int32
description: Start coordinate (1-based, **inclusive**)
- name: end
dtype: int32
description: End coordinate (1-based, **inclusive**)
- name: strand
dtype: ClassLabel
...
```
is producing the following error in the data viewer:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 66, in compute_config_names_response
config_names = get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 161, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1031, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 996, in dataset_module_factory
return HubDatasetModuleFactory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 605, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 386, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 317, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 2027, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1872, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1459, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1465, in generate_from_dict
raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
ValueError: Feature type 'Classlabel' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'List', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```
I think that this is caused by this line
https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/features/features.py#L2013
Reproducible example from [naming.py](https://github.com/huggingface/datasets/blob/896616c6cb03d92a33248c3529b0796cda27e955/src/datasets/naming.py)
```python
import itertools
import os
import re
_uppercase_uppercase_re = re.compile(r"([A-Z]+)([A-Z][a-z])")
_lowercase_uppercase_re = re.compile(r"([a-z\d])([A-Z])")
_single_underscore_re = re.compile(r"(?<!_)_(?!_)")
_multiple_underscores_re = re.compile(r"(_{2,})")
_split_re = r"^\w+(\.\w+)*$"
def snakecase_to_camelcase(name):
"""Convert snake-case string to camel-case string."""
name = _single_underscore_re.split(name)
name = [_multiple_underscores_re.split(n) for n in name]
return "".join(n.capitalize() for n in itertools.chain.from_iterable(name) if n != "")
snakecase_to_camelcase("ClassLabel")
```
Result:
```raw
'Classlabel'
```
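A possible workaround (my suggestion, untested against the viewer): write the dtype in snake_case in the YAML metadata, since the same helper maps it back to the correct feature name:
```python
snakecase_to_camelcase("class_label")
```
Result:
```raw
'ClassLabel'
```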
|
CLOSED
| 2025-08-21T23:28:50
| 2025-09-10T15:23:41
| 2025-09-10T15:23:41
|
https://github.com/huggingface/datasets/issues/7744
|
cmatKhan
| 3
|
[] |
7,742
|
module 'pyarrow' has no attribute 'PyExtensionType'
|
### Describe the bug
When importing certain libraries, users will encounter the following error which can be traced back to the datasets library.
module 'pyarrow' has no attribute 'PyExtensionType'.
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs due to the following. I will proceed to submit a PR with the below fix:
**Issue Reason**
The issue is that PyArrow 21.0.0 no longer provides `PyExtensionType`: the class was deprecated in favor of `ExtensionType` and has been removed in recent PyArrow releases.
**Issue Solution**
Making the following changes to the library files below should temporarily resolve the issue.
I will submit a PR to the datasets library in the meantime.
env_name/lib/python3.10/site-packages/datasets/features/features.py:
```
521 self.shape = tuple(shape)
522 self.value_type = dtype
523 self.storage_dtype = self._generate_dtype(self.value_type)
524 - pa.PyExtensionType.__init__(self, self.storage_dtype)
524 + pa.ExtensionType.__init__(self, self.storage_dtype)
525
526 def __reduce__(self):
527 return self.__class__, (
```
Updated venv_name/lib/python3.10/site-packages/datasets/features/features.py:
```
510 _type: str = field(default="Array5D", init=False, repr=False)
511
512
513 - class _ArrayXDExtensionType(pa.PyExtensionType):
513 + class _ArrayXDExtensionType(pa.ExtensionType):
514 ndims: Optional[int] = None
515
516 def __init__(self, shape: tuple, dtype: str):
```
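Note that `pa.ExtensionType` has a slightly different contract than the removed `PyExtensionType`, so a pure rename may not be enough. A minimal sketch of the full contract (toy names, not the actual datasets code):
```python
import pyarrow as pa


class MyExtensionType(pa.ExtensionType):
    def __init__(self):
        # ExtensionType additionally requires an extension name
        pa.ExtensionType.__init__(self, pa.int64(), "example.my_type")

    def __arrow_ext_serialize__(self):
        return b""  # nothing to serialize for this toy type

    @classmethod
    def __arrow_ext_deserialize__(cls, storage_type, serialized):
        return cls()


# unlike PyExtensionType, the type must be registered explicitly
pa.register_extension_type(MyExtensionType())
```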
### Steps to reproduce the bug
Ragas version: 0.3.1
Python version: 3.11
**Code to Reproduce**
_**In notebook:**_
!pip install ragas
from ragas import evaluate
### Expected behavior
The required package installs without issue.
### Environment info
In Jupyter Notebook.
venv
|
OPEN
| 2025-08-20T06:14:33
| 2025-09-09T02:51:46
| null |
https://github.com/huggingface/datasets/issues/7742
|
mnedelko
| 2
|
[] |
7,741
|
Preserve tree structure when loading HDF5
|
### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user.
### Your contribution
I'll open a PR (#7743)
|
CLOSED
| 2025-08-19T15:42:05
| 2025-08-26T15:28:06
| 2025-08-26T15:28:06
|
https://github.com/huggingface/datasets/issues/7741
|
klamike
| 0
|
[
"enhancement"
] |
7,739
|
Replacement of "Sequence" feature with "List" breaks backward compatibility
|
PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of "fixing" them. I can load them in 4.0.0 but I can't re-save with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.
Perhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.
|
OPEN
| 2025-08-18T17:28:38
| 2025-09-10T14:17:50
| null |
https://github.com/huggingface/datasets/issues/7739
|
evmaki
| 1
|
[] |
7,738
|
Allow saving multi-dimensional ndarray with dynamic shapes
|
### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarrays with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data whose dimensions are not fixed.
A possible implementation could be a new Array or Tensor feature type that stores the data in a structured format, for example,
```python
{
"shape": (5, 224, 224),
"dtype": "uint8",
"data": [...]
}
```
This would allow the datasets library to handle heterogeneous array sizes within a single column without requiring a fixed shape definition in the feature schema.
### Motivation
I am currently trying to upload data from astronomical telescopes, specifically FITS files, to the Hugging Face Hub. This type of data is very similar to images but often has more than three dimensions. For example, data from the SDSS project contains five channels (u, g, r, i, z), and the pixel values can exceed 255, making the Pillow-based Image feature unsuitable.
The current datasets library requires a fixed shape to be defined in the feature schema for multi-dimensional arrays, which is a major roadblock. This prevents me from saving my data, as the dimensions of the arrays can vary across different FITS files.
https://github.com/huggingface/datasets/blob/985c9bee6bfc345787a8b9dd316e1d4f3b930503/src/datasets/features/features.py#L613-L614
A feature that supports dynamic shapes would be incredibly beneficial for the astronomy community and other fields dealing with similar high-dimensional, variable-sized data (e.g., medical imaging, scientific simulations).
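In the meantime, a minimal workaround sketch under the current schema (column names are just illustrative): store the flattened values alongside the shape and dtype, and reshape on access:
```python
import numpy as np
from datasets import Dataset

arrays = [
    np.zeros((5, 224, 224), dtype=np.uint8),
    np.ones((5, 128, 128), dtype=np.uint8),  # a different shape in the same column
]
ds = Dataset.from_dict(
    {
        "data": [a.flatten() for a in arrays],
        "shape": [list(a.shape) for a in arrays],
        "dtype": [str(a.dtype) for a in arrays],
    }
)

row = ds[0]
restored = np.asarray(row["data"], dtype=row["dtype"]).reshape(row["shape"])
```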
### Your contribution
I am willing to create a PR to help implement this feature if the proposal is accepted.
|
OPEN
| 2025-08-18T02:23:51
| 2025-08-26T15:25:02
| null |
https://github.com/huggingface/datasets/issues/7738
|
ryan-minato
| 2
|
[
"enhancement"
] |
7,733
|
Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path
|
### Describe the bug
I'm not sure if this is a bug or intended behavior, and I may not fully understand how dataset loading is supposed to work, but there appears to be a bug in how locally stored Image() columns are accessed. I've uploaded a new dataset to Hugging Face (rmdig/rocky_mountain_snowpack) but have run into a lot of trouble getting the images handled properly (at least in the way I'd expect them to be handled).
I find that I cannot use relative paths when loading images, either from the Hugging Face repo or from a local copy: the library always joins the relative path onto my current working directory instead of the dataset root. As a result, to use the datasets library with my dataset I either have to change my working directory to the dataset folder or abandon the Dataset object structure, which I can't imagine is intended. So I fall back to URLs, since an absolute path on my system obviously wouldn't work for others. The URLs work OK, but even though I have the dataset downloaded locally, it appears to be re-downloaded every time I train my snowGAN model on it (and I often hit HTTPS errors from over-requesting the data).
Or maybe relative image paths aren't intended to be loaded directly through the datasets library as images and should be kept as strings for the user to handle? If so, I feel like you're missing out on some pretty seamless functionality.
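In case it helps others, a workaround sketch that seems to do what I want (assuming file_path is kept as a plain string column): resolve the relative paths against the dataset root before casting to Image():
```python
import os

import datasets

root = os.path.abspath("path/to/local/rocky_mountain_snowpack")
ds = datasets.load_dataset(root)
ds = ds.map(lambda x: {"file_path": os.path.join(root, x["file_path"])})
ds = ds.cast_column("file_path", datasets.Image())
```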
### Steps to reproduce the bug
1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer.
2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string
```yaml
---
dataset_info:
  features:
  - name: image
    dtype: Image
  - name: file_path
    dtype: Image
```
3. Initialize the dataset locally, make sure your working directory is not the dataset directory root
`dataset = datasets.load_dataset('path/to/local/rocky_mountain_snowpack/')`
4. Access one of the samples and you'll get an error that the image was not found at current/working/directory/preprocessed/cores/image_1.png, showing that the library simply looks in the current working directory + relative path:
```
>>> dataset['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
image = PIL.Image.open(path)
^^^^^^^^^^^^^^^^^^^^
File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
```
### Expected behavior
I expect datasets and Image() to load the locally hosted data using the path/to/local/rocky_mountain_snowpack/ root (which I pass to datasets.load_dataset(), or which you resolve on the backend) + the relative path.
Instead, it appears to load from my current working directory + relative path.
### Environment info
Tested on…
Windows 11, Ubuntu Linux 22.04 and macOS Sequoia 15.5 (Apple Silicon M2)
datasets version 4.0.0
Python 3.12 and 3.13
|
CLOSED
| 2025-08-08T19:10:58
| 2025-10-07T04:47:36
| 2025-10-07T04:32:48
|
https://github.com/huggingface/datasets/issues/7733
|
dennys246
| 2
|
[] |
7,732
|
webdataset: key errors when `field_name` has upper case characters
|
### Describe the bug
When using a webdataset each sample can be a collection of different "fields"
like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
If the field name contains uppercase characters, the HF webdataset integration throws a KeyError when trying to load the dataset, e.g. from a dataset (since updated so that it no longer triggers this error):
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[1], line 2
1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)
File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1409 return builder_instance.as_streaming_dataset(split=split)
1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
1413 download_config=download_config,
1414 download_mode=download_mode,
1415 verification_mode=verification_mode,
1416 num_proc=num_proc,
1417 storage_options=storage_options,
1418 )
1420 # Build dataset for splits
1421 keep_in_memory = (
1422 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1423 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
892 if num_proc is not None:
893 prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
895 dl_manager=dl_manager,
896 verification_mode=verification_mode,
897 **prepare_split_kwargs,
898 **download_and_prepare_kwargs,
899 )
900 # Sync info
901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609 super()._download_and_prepare(
1610 dl_manager,
1611 verification_mode,
1612 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1613 or verification_mode == VerificationMode.ALL_CHECKS,
1614 **prepare_splits_kwargs,
1615 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
946 split_dict = SplitDict(dataset_name=self.dataset_name)
947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
950 # Checksums verification
951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
78 if not self.info.features:
79 # Get one example to get the feature types
80 pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81 first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
82 if any(example.keys() != first_examples[0].keys() for example in first_examples):
83 raise ValueError(
84 "The TAR archives of the dataset should be in WebDataset format, "
85 "but the files in the archive don't share the same prefix or the same types."
86 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
53 data_extension = field_name.split(".")[-1]
54 if data_extension in cls.DECODERS:
---> 55 current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
56 if current_example:
57 yield current_example
KeyError: 'processed_log_IMU_magnetometer_value.npy'
```
### Steps to reproduce the bug
unit test was added in: https://github.com/huggingface/datasets/pull/7726
it fails without the fixed proposed in the same PR
### Expected behavior
Not throwing a key error.
### Environment info
```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
```
|
OPEN
| 2025-08-08T16:56:42
| 2025-08-08T16:56:42
| null |
https://github.com/huggingface/datasets/issues/7732
|
YassineYousfi
| 0
|
[] |
7,731
|
Add the possibility of a backend for audio decoding
|
### Feature request
Add the possibility of choosing a backend for audio decoding. Before version 4.0.0, soundfile was used; now torchcodec is used, but torchcodec requires FFmpeg, which is problematic to install on platforms such as Colab. Therefore, I suggest adding a decoder selection option when loading the dataset.
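A sketch of what the requested API could look like (the `backend` parameter does not exist today; this is only the proposal):
```python
from datasets import Audio, load_dataset

ds = load_dataset("some/audio-dataset", split="train")
# hypothetical: pick a decoder that does not require FFmpeg
ds = ds.cast_column("audio", Audio(sampling_rate=16_000, backend="soundfile"))
```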
### Motivation
I use a service for training models in which ffmpeg cannot be installed.
### Your contribution
I use a service for training models in which ffmpeg cannot be installed.
|
OPEN
| 2025-08-08T11:08:56
| 2025-08-20T16:29:33
| null |
https://github.com/huggingface/datasets/issues/7731
|
intexcor
| 2
|
[
"enhancement"
] |
7,729
|
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
|
> Hi, is there any solution for this error? I tried installing:
> `pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html`
> This works fine, but how do I install a PyTorch version that is built for GPU?
|
OPEN
| 2025-08-07T14:07:23
| 2025-09-24T02:17:15
| null |
https://github.com/huggingface/datasets/issues/7729
|
SaleemMalikAI
| 1
|
[] |
7,728
|
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
|
### Describe the bug
When loading dataset, the info specified by `data_files` did not overwrite the original info.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz",
"validation": "en/c4-validation.00000-of-00008.json.gz"},
)
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
split="train"
)
```
```log
ExpectedMoreSplitsError: {'validation'}
```
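For anyone else hitting this, a workaround sketch that skips the split-size verification when loading only a subset of the shards (`verification_mode` is an existing `load_dataset` parameter):
```python
from datasets import load_dataset

traindata = load_dataset(
    "allenai/c4",
    "en",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    verification_mode="no_checks",
)
```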
### Expected behavior
No error
### Environment info
datasets 4.0.0
|
OPEN
| 2025-08-07T04:04:50
| 2025-10-06T21:08:39
| null |
https://github.com/huggingface/datasets/issues/7728
|
efsotr
| 3
|
[] |
7,727
|
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
|
### Describe the bug
```
- config_name: some_config
data_files:
- split: train
path:
- images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper URL joining. `load_dataset` on the same directory locally works fine.
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe this directory loads with `load_dataset("filesystem_path", "some_config")` correctly.
4. Observe exceptions when you load this with `load_dataset("repoid/filesystem_path", "some_config")`
### Expected behavior
`./` prefix should be interpreted correctly
### Environment info
datasets 4.0.0
datasets 3.4.0
reproduce
|
OPEN
| 2025-08-06T08:21:37
| 2025-08-06T08:21:37
| null |
https://github.com/huggingface/datasets/issues/7727
|
doctorpangloss
| 0
|
[] |
7,724
|
Can not stepinto load_dataset.py?
|
I set a breakpoint in "load_dataset.py" and tried to debug my data-loading code, but execution does not stop at any breakpoint. Can "load_dataset.py" not be stepped into?
|
OPEN
| 2025-08-05T09:28:51
| 2025-08-05T09:28:51
| null |
https://github.com/huggingface/datasets/issues/7724
|
micklexqg
| 0
|
[] |
7,723
|
Don't remove `trust_remote_code` arg!!!
|
### Feature request
Defaulting it to False is a nice balance, but we need to be able to manually set it to True in certain scenarios!
Please add the `trust_remote_code` arg back!
### Motivation
Defaulting it to False is a nice balance; we need to be able to manually set it to True in certain scenarios!
### Your contribution
Defaulting it to False is a nice balance; we need to be able to manually set it to True in certain scenarios!
|
OPEN
| 2025-08-04T15:42:07
| 2025-08-04T15:42:07
| null |
https://github.com/huggingface/datasets/issues/7723
|
autosquid
| 0
|
[
"enhancement"
] |
7,722
|
Out of memory even though using load_dataset(..., streaming=True)
|
### Describe the bug
I am iterating over a large dataset that I load with streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time, and I eventually run into an OOM.
### Steps to reproduce the bug
```
import os

import soundfile as sf
from datasets import load_dataset
from tqdm import tqdm

NSFW_TARGET_FOLDER = "..."  # destination directory, defined elsewhere in the original script

ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i, sample in enumerate(tqdm(ds)):
    target_file = os.path.join(NSFW_TARGET_FOLDER, f"audio{i}.wav")
    try:
        sf.write(target_file, sample["audio"]["array"], samplerate=sample["audio"]["sampling_rate"])
    except Exception as e:
        print(f"Could not write audio {i} in ds: {e}")
```
### Expected behavior
I'd expect to have a small memory footprint and memory being freed after each iteration of the for loop. Instead the memory usage is increasing. I tried to remove the logic to write the sound file and just print the sample but the issue remains the same.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0
|
OPEN
| 2025-08-04T14:41:55
| 2025-08-04T14:41:55
| null |
https://github.com/huggingface/datasets/issues/7722
|
padmalcom
| 0
|
[] |
7,721
|
Bad split error message when using percentages
|
### Describe the bug
Hi, I'm trying to download a dataset. To not load the entire dataset in memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library returns this error:
```
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
```
Edit: the same happens with a split like `train[:90000]`.
### Steps to reproduce the bug
```
for split in range(10):
split_str = f"train[{split*10}%:{(split+1)*10}%]"
print(f"Processing split {split_str}...")
ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
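If percent slices are resolved against the recorded split sizes and therefore cannot work with streaming=True, an alternative sketch with sharding (assuming `IterableDataset.shard` is available in this version):
```python
from datasets import load_dataset

ds = load_dataset("user/dataset", split="train", streaming=True)
for index in range(10):
    shard = ds.shard(num_shards=10, index=index)  # roughly a tenth of the stream
    for example in shard:
        ...
```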
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
### Environment info
python 3.12.11
ubuntu 24
dataset 4.0.0
|
OPEN
| 2025-08-04T13:20:25
| 2025-08-14T14:42:24
| null |
https://github.com/huggingface/datasets/issues/7721
|
padmalcom
| 2
|
[] |
7,720
|
Datasets 4.0 map function causing column not found
|
### Describe the bug
Column returned after mapping is not found in new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction: after running get_total_audio_length, it errors out because `data` has no `duration` column.
```
def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

def get_total_audio_length(dataset):
    data = dataset.map(compute_duration, num_proc=NUM_PROC)  # NUM_PROC defined elsewhere
    print(data)
    durations = data["duration"]
    total_seconds = sum(durations)
    return total_seconds
```
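One debugging sketch, reusing the snippet above: force the map to re-run so that a stale cache written before the column existed can be ruled out (`load_from_cache_file` is an existing `map` parameter):
```python
data = dataset.map(compute_duration, num_proc=NUM_PROC, load_from_cache_file=False)
print(data.column_names)  # "duration" should be listed here
```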
### Expected behavior
New datasets.Dataset instance should have new columns attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2
|
OPEN
| 2025-08-03T12:52:34
| 2025-08-07T19:23:34
| null |
https://github.com/huggingface/datasets/issues/7720
|
Darejkal
| 3
|
[] |
7,719
|
Specify dataset columns types in typehint
|
### Feature request
Make Dataset optionally generic for usage with type annotations, like it was done for `torch.utils.data.DataLoader`: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of Dataset objects, but they're a bit poor in type hints. E.g. we can specify this for a DataLoader:
```python
from typing import TypedDict
from torch.utils.data import DataLoader
class CorpusInput(TypedDict):
title: list[str]
body: list[str]
class QueryInput(TypedDict):
query: list[str]
instruction: list[str]
def queries_loader() -> DataLoader[QueryInput]:
...
def corpus_loader() -> DataLoader[CorpusInput]:
...
```
But for datasets we can only describe the expected columns in comments:
```python
from datasets import Dataset
QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str` """
```
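A sketch of what the proposed annotation could look like (Dataset is not generic today; the subscripted form below is hypothetical and written as a string annotation so it would not fail at runtime):
```python
from typing import TypedDict

from datasets import Dataset


class QueryInput(TypedDict):
    query: list[str]
    instruction: list[str]


def queries_dataset() -> "Dataset[QueryInput]":  # hypothetical generic form
    ...
```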
### Your contribution
I can create draft implementation
|
OPEN
| 2025-08-02T13:22:31
| 2025-08-02T13:22:31
| null |
https://github.com/huggingface/datasets/issues/7719
|
Samoed
| 0
|
[
"enhancement"
] |
7,717
|
Cached dataset is not used when explicitly passing the cache_dir parameter
|
### Describe the bug
Hi, we are pre-downloading a dataset using snapshot_download(). When loading this exact dataset with load_dataset() the cached snapshot is not used. In both calls, I provide the cache_dir parameter.
### Steps to reproduce the bug
```
from datasets import load_dataset, concatenate_datasets
from huggingface_hub import snapshot_download
def download_ds(name: str):
snapshot_download(repo_id=name, repo_type="dataset", cache_dir="G:/Datasets/cache")
def prepare_ds():
audio_ds = load_dataset("openslr/librispeech_asr", num_proc=4, cache_dir="G:/Datasets/cache")
    print(audio_ds.features)
if __name__ == '__main__':
download_ds("openslr/librispeech_asr")
prepare_ds()
```
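A workaround sketch, assuming the mismatch is between the hub cache (used by snapshot_download) and the datasets cache (used for the prepared Arrow files): point the hub cache at the same directory via the environment before any download happens:
```python
import os

os.environ["HF_HUB_CACHE"] = "G:/Datasets/cache"  # must be set before the first hub download

from datasets import load_dataset
from huggingface_hub import snapshot_download

snapshot_download(repo_id="openslr/librispeech_asr", repo_type="dataset")
audio_ds = load_dataset("openslr/librispeech_asr", num_proc=4)
```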
### Expected behavior
I'd expect that the cached version of the dataset is used. Instead, the same dataset is downloaded again to the default cache directory.
### Environment info
Windows 11
datasets==4.0.0
Python 3.12.11
|
OPEN
| 2025-08-01T07:12:41
| 2025-08-05T19:19:36
| null |
https://github.com/huggingface/datasets/issues/7717
|
padmalcom
| 1
|
[] |
7,709
|
Release 4.0.0 breaks usage patterns of with_format
|
### Describe the bug
Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. Now this possibility seems to be gone with the new Column() class. As far as I see, this makes working on a whole column (in-memory) more complex, i.e. normalizing an in-memory dataset for which iterating would be too slow. Is this intended behaviour? I couldn't find much documentation on the intended usage of the new Column class yet.
### Steps to reproduce the bug
Steps to reproduce:
```
from datasets import load_dataset

dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.with_format("numpy")
print(dataset["star"].ndim)
```
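For reference, a workaround sketch that appears to work with the new lazy Column (my reading of the 4.0 changes, not official guidance): slicing materializes the whole column again:
```python
import numpy as np

column = dataset["star"][:]  # materializes the full column
print(np.asarray(column).ndim)
```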
### Expected behavior
Working on whole columns should be possible.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-63-generic-x86_64-with-glibc2.36
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
|
CLOSED
| 2025-07-30T11:34:53
| 2025-08-07T08:27:18
| 2025-08-07T08:27:18
|
https://github.com/huggingface/datasets/issues/7709
|
wittenator
| 2
|
[] |
7,707
|
load_dataset() in 4.0.0 failed when decoding audio
|
### Describe the bug
Cannot decode audio data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
print(dataset[0]["audio"]["array"])
```
On the first run, I got:
```
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 172, in decode_example
raise ImportError("To support decoding audio data, please install 'torchcodec'.")
ImportError: To support decoding audio data, please install 'torchcodec'.
```
After `pip install torchcodec` and rerunning, I got:
```
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/_metadata.py", line 16, in <module>
from torchcodec._core.ops import (
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 84, in <module>
load_torchcodec_shared_libraries()
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 69, in load_torchcodec_shared_libraries
raise RuntimeError(
RuntimeError: Could not load libtorchcodec. Likely causes:
1. FFmpeg is not properly installed in your environment. We support
versions 4, 5, 6 and 7.
2. The PyTorch version (2.8.0a0+5228986c39.nv25.06) is not compatible with
this version of TorchCodec. Refer to the version compatibility
table:
https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
3. Another runtime dependency; see exceptions below.
The following exceptions were raised as we tried to load libtorchcodec:
[start of libtorchcodec loading traceback]
FFmpeg version 7: libavutil.so.59: cannot open shared object file: No such file or directory
FFmpeg version 6: libavutil.so.58: cannot open shared object file: No such file or directory
FFmpeg version 5: libavutil.so.57: cannot open shared object file: No such file or directory
FFmpeg version 4: libavutil.so.56: cannot open shared object file: No such file or directory
[end of libtorchcodec loading traceback].
```
After `apt update && apt install ffmpeg -y`, I got:
```
Traceback (most recent call last):
File "/workspace/jiqing/test_datasets.py", line 4, in <module>
print(dataset[0]["audio"]["array"])
~~~~~~~^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2841, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 657, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 410, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 459, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 223, in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 2093, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 1405, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 198, in decode_example
audio = AudioDecoder(bytes, stream_index=self.stream_index, sample_rate=self.sampling_rate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_audio_decoder.py", line 62, in __init__
self._decoder = create_decoder(source=source, seek_mode="approximate")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_decoder_utils.py", line 33, in create_decoder
return core.create_from_bytes(source, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 144, in create_from_bytes
return create_from_tensor(buffer, seek_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'torchcodec_ns::create_from_tensor' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchcodec_ns::create_from_tensor' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
Meta: registered at /dev/null:214 [kernel]
BackendSelect: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /__w/torchcodec/torchcodec/pytorch/torchcodec/src/torchcodec/_core/custom_ops.cpp:694 [kernel]
FuncTorchDynamicLayerBackMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /opt/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /opt/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradOther: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMPS: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
AutogradXPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:108 [backend fallback]
AutogradLazy: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradMTIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMAIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMeta: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:99 [backend fallback]
Tracer: registered at /opt/pytorch/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastMAIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastXPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:542 [backend fallback]
AutocastMPS: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /opt/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
```
### Expected behavior
The result is
```
[0.00238037 0.0020752 0.00198364 ... 0.00042725 0.00057983 0.0010376 ]
```
on `datasets==3.6.0`
### Environment info
[NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`
```
- `datasets` version: 4.0.0
- Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
```
|
CLOSED
| 2025-07-29T03:25:03
| 2025-10-05T06:41:38
| 2025-08-01T05:15:45
|
https://github.com/huggingface/datasets/issues/7707
|
jiqing-feng
| 16
|
[] |
7,705
|
Can Not read installed dataset in dataset.load(.)
|
Hi folks, I'm a newbie with the Hugging Face datasets API.
As the title says, I'm facing an issue where load_dataset() cannot find the already-downloaded dataset.
Code snippet:
<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />
Data path:
"/xxx/joseph/llava_ds/vlm_ds"
It contains all the video clips I want!
<img width="1398" height="261" alt="Image" src="https://github.com/user-attachments/assets/bf213b66-e344-4311-97e7-bc209677ae77" />
I run the Python script with:
<img width="1042" height="38" alt="Image" src="https://github.com/user-attachments/assets/8b3fcee4-e1a6-41b8-bee1-91567b00d9d2" />
But something bad happened: even though I provide the dataset path via "HF_HUB_CACHE", it still attempts to download the data from the remote side:
<img width="1697" height="813" alt="Image" src="https://github.com/user-attachments/assets/baa6cff1-a724-4710-a8c4-4805459deffb" />
Any suggestions will be appreciated!
|
OPEN
| 2025-07-28T09:43:54
| 2025-08-05T01:24:32
| null |
https://github.com/huggingface/datasets/issues/7705
|
HuangChiEn
| 3
|
[] |
7,703
|
[Docs] map() example uses undefined `tokenizer` — causes NameError
|
## Description
The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes an error every time it's copied.
Here is the problematic line:
```python
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This assumes the user has already set up a tokenizer, which contradicts the goal of having self-contained, copy-paste-friendly examples.
## Problem
Users who copy and run the example as-is will encounter:
```python
NameError: name 'tokenizer' is not defined
```
This breaks the flow for users and violates HuggingFace's documentation principle that examples should "work as expected" when copied directly.
## Proposal
Update the example to include the required tokenizer setup using the Transformers library, like so:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds_tokenized = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This will help new users understand the workflow and apply the method correctly.
## Note
This PR complements ongoing improvements like #7700, which clarifies multiprocessing in .map(). My change focuses on the undefined tokenizer that causes the NameError.
|
OPEN
| 2025-07-26T13:35:11
| 2025-07-27T09:44:35
| null |
https://github.com/huggingface/datasets/issues/7703
|
Sanjaykumar030
| 1
|
[] |
7,700
|
[doc] map.num_proc needs clarification
|
https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc
```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached
shards are loaded sequentially.
```
for batch:
```
num_proc (int, optional, defaults to None): The number of processes to use for multiprocessing. If None, no
multiprocessing is used. This can significantly speed up batching for large datasets.
```
So what is the behavior of `map.num_proc`: is it the same as `batch.num_proc`, i.e. multiprocessing is only skipped when `num_proc=None`?
Let's update the doc to be unambiguous.
**bonus**: we could make all of these behave similarly to `DataLoader.num_workers` - where `num_workers==0` implies no multiprocessing. I think that's the most intuitive, IMHO. 0 workers - the main process has to do all the work. `None` could be the same as `0`.
context: debugging a failing `map`
Thank you!
|
OPEN
| 2025-07-25T17:35:09
| 2025-07-25T17:39:36
| null |
https://github.com/huggingface/datasets/issues/7700
|
sfc-gh-sbekman
| 0
|
[] |
7,699
|
Broken link in documentation for "Create a video dataset"
|
The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" />
|
OPEN
| 2025-07-24T19:46:28
| 2025-07-25T15:27:47
| null |
https://github.com/huggingface/datasets/issues/7699
|
cleong110
| 1
|
[] |
7,698
|
NotImplementedError when using streaming=True in Google Colab environment
|
### Describe the bug
When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after upgrading datasets and huggingface_hub and restarting the session.
### Steps to reproduce the bug
Open a new Google Colab notebook.
(Optional but recommended) Run !pip install --upgrade datasets huggingface_hub and restart the runtime.
Run the following code:
```python
from datasets import load_dataset

try:
    print("Attempting to load a stream...")
    streaming_dataset = load_dataset('tiiuae/falcon-refinedweb', streaming=True)
    print("Success!")
except Exception as e:
    print(e)
```
### Expected behavior
The load_dataset command should return a StreamingDataset object without raising an error, allowing iteration over the dataset.
Actual Behavior
The code fails and prints the following error traceback:
[PASTE THE FULL ERROR TRACEBACK HERE]
(Note: Copy the entire error message you received, from Traceback... to the final error line, and paste it in this section.)
### Environment info
Platform: Google Colab
datasets version: [Run !pip show datasets in Colab and paste the version here]
huggingface_hub version: [Run !pip show huggingface_hub and paste the version here]
Python version: [Run !python --version and paste the version here]
|
OPEN
| 2025-07-23T08:04:53
| 2025-07-23T15:06:23
| null |
https://github.com/huggingface/datasets/issues/7698
|
Aniket17200
| 2
|
[] |
7,697
|
-
|
-
|
CLOSED
| 2025-07-23T01:30:32
| 2025-07-25T15:21:39
| 2025-07-25T15:21:39
|
https://github.com/huggingface/datasets/issues/7697
| null | 0
|
[] |
7,696
|
load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility
|
### Describe the bug
In datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions, this breaks integration tests that depend on consistent sample data across different environments (first and second envs specified below).
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(24000))
sample = ds[0]["audio"]["array"]
print(sample)
# sample in 3.6.0
[0.00231914 0.00245417 0.00187414 ... 0.00061956 0.00101157 0.00076325]
# sample in 4.0.0
array([0.00238037, 0.00220794, 0.00198703, ..., 0.00057983, 0.00085863,
0.00115309], dtype=float32)
```
### Expected behavior
The same dataset should load identical samples across versions to maintain reproducibility.
### Environment info
First env:
- datasets version: 3.6.0
- Platform: Windows-10-10.0.26100-SP0
- Python: 3.11.0
Second env:
- datasets version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python: 3.11.13
|
CLOSED
| 2025-07-22T17:02:17
| 2025-07-30T14:22:21
| 2025-07-30T14:22:21
|
https://github.com/huggingface/datasets/issues/7696
|
Manalelaidouni
| 2
|
[] |
7,694
|
Dataset.to_json consumes excessive memory, appears to not be a streaming operation
|
### Describe the bug
When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation.
This behavior is unexpected, as the JSONL format is line-oriented and ideally suited for streaming writes. This issue can easily lead to Out-of-Memory (OOM) errors when exporting large datasets, especially in memory-constrained environments like Docker containers.
<img width="1343" height="329" alt="Image" src="https://github.com/user-attachments/assets/518b4263-ad12-422d-9672-28ffe97240ce" />
### Steps to reproduce the bug
```
import os
from datasets import load_dataset, Dataset
from loguru import logger
# A public dataset to test with
REPO_ID = "adam89/TinyStoriesChinese"
SUBSET = "default"
SPLIT = "train"
NUM_ROWS_TO_LOAD = 10 # Use a reasonably large number to see the memory spike
def run_test():
"""Loads data into memory and then saves it, triggering the memory issue."""
logger.info("Step 1: Loading data into an in-memory Dataset object...")
# Create an in-memory Dataset object from a stream
# This simulates having a processed dataset ready to be saved
iterable_dataset = load_dataset(REPO_ID, name=SUBSET, split=SPLIT, streaming=True)
limited_stream = iterable_dataset.take(NUM_ROWS_TO_LOAD)
in_memory_dataset = Dataset.from_generator(limited_stream.__iter__)
logger.info(f"Dataset with {len(in_memory_dataset)} rows created in memory.")
output_path = "./test_output.jsonl"
logger.info(f"Step 2: Saving the dataset to {output_path} using .to_json()...")
logger.info("Please monitor memory usage during this step.")
# This is the step that causes the massive memory allocation
in_memory_dataset.to_json(output_path, force_ascii=False)
logger.info("Save operation complete.")
os.remove(output_path)
if __name__ == "__main__":
# To see the memory usage clearly, run this script with a memory profiler:
# python -m memray run your_script_name.py
# python -m memray tree xxx.bin
run_test()
```
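In the meantime, a workaround sketch that keeps memory flat by writing rows one at a time (reusing the names from the script above):
```python
import json

with open(output_path, "w", encoding="utf-8") as f:
    for row in in_memory_dataset:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```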
### Expected behavior
I would expect the .to_json(lines=True) method to be a memory-efficient, streaming operation. The memory usage should remain low and relatively constant, as data is converted and written to the file line-by-line or in small batches. The memory footprint should not be proportional to the total number of rows in the in_memory_dataset.
### Environment info
datasets version:3.6.0
Python version:3.9.18
os:macOS 15.3.1 (arm64)
|
OPEN
| 2025-07-21T07:51:25
| 2025-07-25T14:42:21
| null |
https://github.com/huggingface/datasets/issues/7694
|
ycq0125
| 1
|
[] |
7,693
|
Dataset scripts are no longer supported, but found superb.py
|
### Describe the bug
Hello,
I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines) but the tutorial seems to work only on old datasets versions.
I then get this error:
```
--------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[65], [line 1](vscode-notebook-cell:?execution_count=65&line=1)
----> [1](vscode-notebook-cell:?execution_count=65&line=1) dataset = datasets.load_dataset("superb", name="asr", split="test")
3 # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
4 # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
5 for out in tqdm(pipe(KeyDataset(dataset, "file"))):
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
1387 verification_mode = VerificationMode(
1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
1389 )
1391 # Create a dataset builder
-> [1392](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392) builder_instance = load_dataset_builder(
1393 path=path,
1394 name=name,
1395 data_dir=data_dir,
1396 data_files=data_files,
1397 cache_dir=cache_dir,
1398 features=features,
1399 download_config=download_config,
1400 download_mode=download_mode,
1401 revision=revision,
1402 token=token,
1403 storage_options=storage_options,
1404 **config_kwargs,
1405 )
1407 # Return iterable dataset in case of streaming
1408 if streaming:
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
1130 if features is not None:
1131 features = _fix_for_backward_compatible_features(features)
-> [1132](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132) dataset_module = dataset_module_factory(
1133 path,
1134 revision=revision,
1135 download_config=download_config,
1136 download_mode=download_mode,
1137 data_dir=data_dir,
1138 data_files=data_files,
1139 cache_dir=cache_dir,
1140 )
1141 # Get dataset builder class
1142 builder_kwargs = dataset_module.builder_kwargs
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
1026 if isinstance(e1, FileNotFoundError):
1027 raise FileNotFoundError(
1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1030 ) from None
-> [1031](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031) raise e1 from None
1032 else:
1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")
File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
981 try:
982 api.hf_hub_download(
983 repo_id=path,
984 filename=filename,
(...) 987 proxies=download_config.proxies,
988 )
--> [989](https://file+.vscode-resource.vscode-cdn.net/home/edwin/Desktop/debug/llm_course/~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989) raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
990 except EntryNotFoundError:
991 # Use the infos from the parquet export except in some cases:
992 if data_dir or data_files or (revision and revision != "main"):
RuntimeError: Dataset scripts are no longer supported, but found superb.py
```
NB: I tried to replace "superb" with "anton-l/superb_demo" but I get a 'torchcodec' import error. Maybe I misunderstood something.
### Steps to reproduce the bug
```
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
print(out)
# {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
# {"text": ....}
# ....
```
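A workaround sketch (my assumption): pin to a release that still supports loading scripts, e.g. `pip install "datasets<4.0.0"`, and opt in to the script explicitly:
```python
import datasets

dataset = datasets.load_dataset("superb", name="asr", split="test", trust_remote_code=True)
```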
### Expected behavior
Get the tutorial expected results
### Environment info
--- SYSTEM INFO ---
Operating System: Ubuntu 24.10
Kernel: Linux 6.11.0-29-generic
Architecture: x86-64
--- PYTHON ---
Python 3.11.13
--- VENV INFO ----
datasets=4.0.0
transformers=4.53
tqdm=4.67.1
|
OPEN
| 2025-07-20T13:48:06
| 2025-12-02T05:34:39
| null |
https://github.com/huggingface/datasets/issues/7693
|
edwinzajac
| 19
|
[] |
7,692
|
xopen: invalid start byte for streaming dataset with trust_remote_code=True
|
### Describe the bug
I am trying to load the YODAS2 dataset with datasets==3.6.0:
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte`
The cause of the error is the following:
```
from datasets.utils.file_utils import xopen
filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'
xopen(filepath, 'r').read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
And the cause of this is the following:
```
import fsspec
fsspec.open(
'hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json',
mode='r',
hf={'token': None, 'endpoint': 'https://huggingface.co'},
).open().read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```
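A debugging sketch: reading the first bytes in binary mode shows whether the payload is really UTF-8 JSON or binary/compressed content:
```python
from datasets.utils.file_utils import xopen

filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'
with xopen(filepath, 'rb') as f:
    print(f.read(4))  # e.g. b'\x1f\x8b...' would indicate a gzip stream
```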
Is it true that streaming=True loading is not supported anymore for trust_remote_code=True, even with datasets==3.6.0? This breaks backward compatibility.
### Steps to reproduce the bug
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True)))
```
### Expected behavior
No errors expected
### Environment info
datasets==3.6.0, ubuntu 24.04
|
OPEN
| 2025-07-20T11:08:20
| 2025-07-25T14:38:54
| null |
https://github.com/huggingface/datasets/issues/7692
|
sedol1339
| 1
|
[] |