Dataset Viewer

| Column | Type | Range / values |
|---|---|---|
| url | string | lengths 58-61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 48-51 |
| id | int64 | 600M-3.09B |
| node_id | string | lengths 18-24 |
| number | int64 | 2-7.59k |
| title | string | lengths 1-290 |
| user | dict | |
| labels | list | lengths 0-4 |
| state | string | 1 class |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0-4 |
| milestone | dict | |
| comments | sequence | lengths 0-30 |
| created_at | timestamp[ns, tz=UTC] | 2020-04-14 18:18:51 to 2025-05-27 13:46:05 |
| updated_at | timestamp[ns, tz=UTC] | 2020-04-29 09:23:05 to 2025-06-09 22:00:16 |
| closed_at | timestamp[ns, tz=UTC] | 2020-04-29 09:23:05 to 2025-06-06 16:12:36 |
| author_association | string | 4 classes |
| type | float64 | |
| active_lock_reason | float64 | |
| sub_issues_summary | dict | |
| body | string | lengths 0-228k, nullable |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 classes |
| draft | float64 | |
| pull_request | null | |
| time_to_close_hours | float64 | 0.01-28.8k |
| __index_level_0__ | int64 | 18-7.53k |
https://api.github.com/repos/huggingface/datasets/issues/7588
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7588/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7588/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7588/events
|
https://github.com/huggingface/datasets/issues/7588
| 3,094,012,025 |
I_kwDODunzps64auB5
| 7,588 |
ValueError: Invalid pattern: '**' can only be an entire path component [Colab]
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wkambale",
"id": 43061081,
"login": "wkambale",
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"repos_url": "https://api.github.com/users/wkambale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wkambale",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\n```",
"```bash\ndatasets version: 2.14.4\nhuggingface_hub version: 0.31.4\nfsspec version: 2025.3.2\n```",
"Version 2.14.4 is not the latest version available, in fact it is from August 08, 2023 (you can check here: https://pypi.org/project/datasets/#history)\n\nUse pip install datasets==3.6.0 to install a more recent version (from May 7, 2025)\n\nI also had the same problem with Colab, after updating to the latest version it was solved.\n\nI hope it helps",
"thank you @CleitonOERocha. it sure did help.\n\nupdating `datasets` to v3.6.0 and keeping `fsspec` on v2025.3.2 eliminates the issue.",
"Very helpful, thank you!"
] | 2025-05-27T13:46:05 | 2025-05-30T13:22:52 | 2025-05-30T01:26:30 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that I've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate).
I then changed a few hyperparameters: increased the number of tokens for the model, added Transformer layers, and so on.
However, whenever I try to load the dataset, this error keeps coming up. I have tried everything and re-written the code many times, and it still appears.
### Steps to reproduce the bug
Imports:
```bash
!pip install datasets huggingface_hub fsspec
```
Python code:
```python
from datasets import load_dataset
HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus"
# Load the dataset
try:
if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME":
raise ValueError(
"Please provide a valid Hugging Face dataset name."
)
dataset = load_dataset(HF_DATASET_NAME)
# Omitted code as the error happens on the line above
except ValueError as ve:
print(f"Configuration Error: {ve}")
raise
except Exception as e:
print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}")
raise e
```
Now, I have tried going through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing there helps.
### Expected behavior
Loading the dataset successfully and performing splits (train, test, validation).
### Environment info
From the imports above, I do not pin specific versions of these libraries, so the latest available version of each is installed:
* `datasets` version: latest
* `Platform`: Google Colab
* `Hardware`: NVIDIA A100 GPU
* `Python` version: latest
* `huggingface_hub` version: latest
* `fsspec` version: latest
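Per the resolution in the comments, pinning `datasets` to v3.6.0 while keeping `fsspec` at v2025.3.2 reportedly eliminates the error in Colab, for example:
```bash
!pip install "datasets==3.6.0" "fsspec==2025.3.2" huggingface_hub
```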
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wkambale",
"id": 43061081,
"login": "wkambale",
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"repos_url": "https://api.github.com/users/wkambale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wkambale",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7588/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7588/timeline
| null |
completed
| null | null | 59.673611 | 18 |
https://api.github.com/repos/huggingface/datasets/issues/7583
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7583/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7583/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7583/events
|
https://github.com/huggingface/datasets/issues/7583
| 3,088,987,757 |
I_kwDODunzps64HjZt
| 7,583 |
load_dataset type stubs reject List[str] for split parameter, but runtime supports it
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25069969?v=4",
"events_url": "https://api.github.com/users/hierr/events{/privacy}",
"followers_url": "https://api.github.com/users/hierr/followers",
"following_url": "https://api.github.com/users/hierr/following{/other_user}",
"gists_url": "https://api.github.com/users/hierr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hierr",
"id": 25069969,
"login": "hierr",
"node_id": "MDQ6VXNlcjI1MDY5OTY5",
"organizations_url": "https://api.github.com/users/hierr/orgs",
"received_events_url": "https://api.github.com/users/hierr/received_events",
"repos_url": "https://api.github.com/users/hierr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hierr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hierr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hierr",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-05-25T02:33:18 | 2025-05-26T18:29:58 | 2025-05-26T18:29:58 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime; however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime.
### Steps to reproduce the bug
1. Use `load_dataset` with multiple splits, e.g.:
```
from datasets import load_dataset
ds_train, ds_val, ds_test = load_dataset(
"Silly-Machine/TuPyE-Dataset",
"binary",
split=["train[:75%]", "train[75%:]", "test"]
)
```
2. Observe that the code executes correctly at runtime while Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"`.
### Expected behavior
The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior.
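For illustration, a hedged sketch of what the broadened annotation could look like (a hypothetical stub fragment, not the actual `datasets` signature; unrelated parameters are omitted):
```python
from typing import List, Optional, Union

from datasets import Split

# hypothetical stub fragment: `split` also accepts a list of split expressions
def load_dataset(
    path: str,
    name: Optional[str] = None,
    *,
    split: Optional[Union[str, Split, List[str]]] = None,
    # ...remaining parameters omitted in this sketch...
): ...
```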
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.7
- `huggingface_hub` version: 0.32.0
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7583/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7583/timeline
| null |
completed
| null | null | 39.944444 | 23 |
https://api.github.com/repos/huggingface/datasets/issues/7577
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7577/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7577/events
|
https://github.com/huggingface/datasets/issues/7577
| 3,080,833,740 |
I_kwDODunzps63ocrM
| 7,577 |
arrow_schema is not compatible with list
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4",
"events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanshen-upwork/followers",
"following_url": "https://api.github.com/users/jonathanshen-upwork/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanshen-upwork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanshen-upwork",
"id": 164412025,
"login": "jonathanshen-upwork",
"node_id": "U_kgDOCcy6eQ",
"organizations_url": "https://api.github.com/users/jonathanshen-upwork/orgs",
"received_events_url": "https://api.github.com/users/jonathanshen-upwork/received_events",
"repos_url": "https://api.github.com/users/jonathanshen-upwork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanshen-upwork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanshen-upwork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanshen-upwork",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Thanks for reporting, I'll look into it",
"Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = datasets.Features({'x':[datasets.Value(dtype='int32')]})\n```\n\nI'm closing this issue if you don't mind",
"Ah is that what the syntax is? I don't think I was able to find an actual example of it so I assumed it was in the same way that you specify types eg. `list[int]`. This is good to know, thanks."
] | 2025-05-21T16:37:01 | 2025-05-26T18:49:51 | 2025-05-26T18:32:55 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
```
import datasets
f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})
f.arrow_schema
Traceback (most recent call last):
File "datasets/features/features.py", line 1826, in arrow_schema
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
^^^^^^^^^
File "datasets/features/features.py", line 1815, in type
return get_nested_type(self)
^^^^^^^^^^^^^^^^^^^^^
File "datasets/features/features.py", line 1252, in get_nested_type
return pa.struct(
^^^^^^^^^^
File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct
File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field
File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type
TypeError: DataType expected, got <class 'list'>
```
The following works
```
f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))})
```
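As the maintainer's reply in the comments points out, the bracket syntax (a plain Python list literal wrapping the inner feature) also resolves correctly:
```python
import datasets

# the [ ] syntax from the comments, not the list[...] type-hint form
f = datasets.Features({"x": [datasets.Value(dtype="int32")]})
print(f.arrow_schema)  # builds without raising
```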
### Expected behavior
According to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765, a Python list should be a valid type specification for features.
### Environment info
- `datasets` version: 3.5.1
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7577/timeline
| null |
completed
| null | null | 121.931667 | 28 |
https://api.github.com/repos/huggingface/datasets/issues/7561
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7561/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7561/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7561/events
|
https://github.com/huggingface/datasets/issues/7561
| 3,046,302,653 |
I_kwDODunzps61kuO9
| 7,561 |
NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4",
"events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}",
"followers_url": "https://api.github.com/users/cyanic-selkie/followers",
"following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}",
"gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cyanic-selkie",
"id": 32219669,
"login": "cyanic-selkie",
"node_id": "MDQ6VXNlcjMyMjE5NjY5",
"organizations_url": "https://api.github.com/users/cyanic-selkie/orgs",
"received_events_url": "https://api.github.com/users/cyanic-selkie/received_events",
"repos_url": "https://api.github.com/users/cyanic-selkie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cyanic-selkie",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-05-07T15:05:42 | 2025-06-05T12:41:30 | 2025-06-05T12:41:30 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR.
### Steps to reproduce the bug
1. Create an `IterableDataset`.
2. Call `.repeat(None)` on it.
3. Wrap it in a pytorch `DataLoader`
4. Iterate over it (see the minimal sketch after this list).
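A minimal sketch of those steps (an untested illustration; it assumes PyTorch is installed and that iterating through the `DataLoader` reaches the code path that queries the iterable's shard count):
```python
from datasets import Dataset
from torch.utils.data import DataLoader

# 1. create an IterableDataset
ds = Dataset.from_dict({"x": list(range(8))}).to_iterable_dataset(num_shards=2)
# 2. repeat it indefinitely
ds = ds.repeat(None)
# 3. wrap it in a PyTorch DataLoader (worker sharding asks for the shard count)
loader = DataLoader(ds, batch_size=4, num_workers=2)
# 4. iterate over it -- this is where the NotImplementedError is reported
for batch in loader:
    print(batch)
    break
```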
### Expected behavior
This should work normally.
### Environment info
datasets: 3.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7561/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7561/timeline
| null |
completed
| null | null | 693.596667 | 44 |
https://api.github.com/repos/huggingface/datasets/issues/7554
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7554/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7554/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7554/events
|
https://github.com/huggingface/datasets/issues/7554
| 3,043,089,844 |
I_kwDODunzps61Yd20
| 7,554 |
datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sei-eschwartz",
"id": 50171988,
"login": "sei-eschwartz",
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sei-eschwartz",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets in standard formats like Parquet instead.\n\nBtw there is a CLI tool to convert a loading script to parquet:\n\n```\ndatasets-cli convert_to_parquet <dataset-name> --trust_remote_code\n```",
"Closing in favor of #6832 "
] | 2025-05-06T14:43:38 | 2025-05-07T14:53:45 | 2025-05-07T14:53:44 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this.
### Steps to reproduce the bug
See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing)
Or:
```python
from datasets import load_dataset
dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True)
```
### Expected behavior
I expected only the `test_synth` split to be downloaded and processed.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.12
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sei-eschwartz",
"id": 50171988,
"login": "sei-eschwartz",
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sei-eschwartz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7554/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7554/timeline
| null |
duplicate
| null | null | 24.168333 | 50 |
https://api.github.com/repos/huggingface/datasets/issues/7546
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7546/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7546/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7546/events
|
https://github.com/huggingface/datasets/issues/7546
| 3,034,018,298 |
I_kwDODunzps6013H6
| 7,546 |
Large memory use when loading large datasets to a ZFS pool
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FredHaa",
"id": 6875946,
"login": "FredHaa",
"node_id": "MDQ6VXNlcjY4NzU5NDY=",
"organizations_url": "https://api.github.com/users/FredHaa/orgs",
"received_events_url": "https://api.github.com/users/FredHaa/received_events",
"repos_url": "https://api.github.com/users/FredHaa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FredHaa",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?",
"Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](https://streamable.com/usb0ql).\n\nMy system is a GPU server running Ubuntu. The disk is a SATA SSD attached to the server using a backplane. It is formatted with ZFS, mounted in /cache, and my HF_HOME is set to /cache/hf\n\nI really need this fixed, so I am more than willing to test out various suggestions you might have, or write a PR if we can figure out what is going on.",
"I'm not super familiar with ZFS, but it looks like it loads the data in memory when the files are memory mapped, which is an issue.\n\nMaybe it's a caching mechanism ? Since `datasets` accesses every memory mapped file to read a small part (the metadata of the arrow record batches), maybe ZFS brings the whole files in memory for quicker subsequent reads. This is an antipattern when it comes to lazy loading datasets of that size though",
"This is the answer.\n\nI tried changing my HF_HOME to an NFS share, and no RAM is then consumed loading the dataset.\n\nI will try to see if I can find a way to configure the ZFS pool to not cache the files (disabling the ARC/primary cache didn't work), and if I do write the solution in this issue. If I can't I guess I have to reformat my cache drive."
] | 2025-05-01T14:43:47 | 2025-05-13T13:30:09 | 2025-05-13T13:29:53 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train models using multiple large datasets.
### Steps to reproduce the bug
`uv run --with datasets==3.5.1 python`
```python
from datasets import load_dataset
load_dataset('MLCommons/peoples_speech', 'clean')
load_dataset('mozilla-foundation/common_voice_17_0', 'en')
```
### Expected behavior
I would expect that a lot less than 500GB of RAM would be required to load the dataset, or at least that the RAM usage would be cleared as soon as the dataset is loaded (and thus reside as a memory mapped file) such that other datasets can be loaded.
### Environment info
I am currently using the latest datasets==3.5.1 but I have had the same problem with multiple other versions.
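Per the comment thread, a hedged workaround sketch: point `HF_HOME` at a filesystem that does not pull memory-mapped files into RAM (an NFS share worked for the reporter; the path below is a placeholder and must be set before `datasets` is imported):
```python
import os

# placeholder path on a non-ZFS filesystem
os.environ["HF_HOME"] = "/mnt/nfs/hf_cache"

from datasets import load_dataset

ds = load_dataset("MLCommons/peoples_speech", "clean")
```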
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FredHaa",
"id": 6875946,
"login": "FredHaa",
"node_id": "MDQ6VXNlcjY4NzU5NDY=",
"organizations_url": "https://api.github.com/users/FredHaa/orgs",
"received_events_url": "https://api.github.com/users/FredHaa/received_events",
"repos_url": "https://api.github.com/users/FredHaa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FredHaa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7546/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7546/timeline
| null |
completed
| null | null | 286.768333 | 58 |
https://api.github.com/repos/huggingface/datasets/issues/7543
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7543/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7543/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7543/events
|
https://github.com/huggingface/datasets/issues/7543
| 3,026,867,706 |
I_kwDODunzps60alX6
| 7,543 |
The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxma20",
"id": 76415358,
"login": "jxma20",
"node_id": "MDQ6VXNlcjc2NDE1MzU4",
"organizations_url": "https://api.github.com/users/jxma20/orgs",
"received_events_url": "https://api.github.com/users/jxma20/received_events",
"repos_url": "https://api.github.com/users/jxma20/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxma20/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxma20",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-04-29T03:04:59 | 2025-04-30T02:22:17 | 2025-04-30T02:22:17 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
## bug
When the map function processes a large dataset, it temporarily stores the data in a cache file on disk. Once the data has been written, the memory it occupied is released, so when processing a large-scale dataset with map, only about `writer_batch_size` examples should be held in memory at a time.
However, I found that the map function does not actually reduce memory usage when I used it. At first, I thought there was a bug in the program, causing a memory leak—meaning the memory was not released after the data was stored in the cache. But later, I used a Linux command to check for recently modified files during program execution and found that no new files were created or modified. This indicates that the program did not store the dataset in the disk cache.
## bug solved
After modifying the parameters of the map function multiple times, I discovered the `cache_file_name` parameter. By changing it, the cache file can be stored in the specified directory. After making this change, I noticed that the cache file appeared. Initially, I found this quite incredible, but then I wondered if the cache file might have failed to be stored in a certain folder. This could be related to the fact that I don't have root privileges.
So, I delved into the source code of the map function to find out where the cache file would be stored by default. Eventually, I found the function `def _get_cache_file_path(self, fingerprint):`, which automatically generates the storage path for the cache file. The output was as follows: `/tmp/hf_datasets-j5qco9ug/cache-f2830487643b9cc2.arrow`. My hypothesis was confirmed: the lack of root privileges indeed prevented the cache file from being stored, which in turn prevented the release of memory. Therefore, changing the storage location to a folder where I have write access resolved the issue.
### Steps to reproduce the bug
My code:
`train_data = train_data.map(process_fun, remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'])`
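A hedged sketch of the workaround described above, reusing `train_data` and `process_fun` from the snippet: pass `cache_file_name` so the cache file is written to a directory where you have write access (the path below is a placeholder):
```python
# point map()'s on-disk cache at a writable location instead of the default /tmp path
train_data = train_data.map(
    process_fun,
    remove_columns=['image_name', 'question_type', 'concern', 'question',
                    'candidate_answers', 'answer'],
    cache_file_name="/data/my_user/hf_cache/cache-train.arrow",  # placeholder path
)
```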
### Expected behavior
Although my bug has been resolved, it still took me nearly a week to search for relevant information and debug the program. However, if a warning or error message about insufficient cache file write permissions could be provided during program execution, I might have been able to identify the cause more quickly. Therefore, I hope this aspect can be improved. I am documenting this bug here so that friends who encounter similar issues can solve their problems in a timely manner.
### Environment info
python: 3.10.15
datasets: 3.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxma20",
"id": 76415358,
"login": "jxma20",
"node_id": "MDQ6VXNlcjc2NDE1MzU4",
"organizations_url": "https://api.github.com/users/jxma20/orgs",
"received_events_url": "https://api.github.com/users/jxma20/received_events",
"repos_url": "https://api.github.com/users/jxma20/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxma20/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxma20",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7543/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7543/timeline
| null |
completed
| null | null | 23.288333 | 61 |
https://api.github.com/repos/huggingface/datasets/issues/7538
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7538/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7538/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7538/events
|
https://github.com/huggingface/datasets/issues/7538
| 3,023,280,056 |
I_kwDODunzps60M5e4
| 7,538 |
`IterableDataset` drops samples when resuming from a checkpoint
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null |
[] | null |
[
"Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable"
] | 2025-04-27T19:34:49 | 2025-05-06T14:04:05 | 2025-05-06T14:03:42 |
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted.
In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one (after formatting). However, the child increments the `shard_example_idx` counter (in its `iter_arrow`) by the whole batch size before returning the batch, which leads to a portion of samples being skipped if the iteration (of the parent iterable) is stopped mid-batch.
Perhaps one way to avoid this would be by signalling the child iterable which samples (within the chunk) are processed by the parent and which are not, so that it can adjust the `shard_example_idx` counter accordingly. This would also mean the chunk needs to be sliced when resuming, but this is straightforward to implement.
The following is a minimal reproducer of the bug:
```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
ds = Dataset.from_dict({"n": list(range(24))})
ds = ds.to_iterable_dataset(num_shards=4)
world_size = 4
rank = 0
ds_rank = split_dataset_by_node(ds, rank, world_size)
it = iter(ds_rank)
examples = []
for idx, example in enumerate(it):
examples.append(example)
if idx == 2:
state_dict = ds_rank.state_dict()
break
ds_rank.load_state_dict(state_dict)
it_resumed = iter(ds_rank)
examples_resumed = examples[:]
for example in it:
examples.append(example)
for example in it_resumed:
examples_resumed.append(example)
print("ORIGINAL ITER EXAMPLES:", examples)
print("RESUMED ITER EXAMPLES:", examples_resumed)
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7538/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7538/timeline
| null |
completed
| null | null | 210.481389 | 66 |
https://api.github.com/repos/huggingface/datasets/issues/7536
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7536/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7536/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7536/events
|
https://github.com/huggingface/datasets/issues/7536
| 3,018,425,549 |
I_kwDODunzps6z6YTN
| 7,536 |
[Errno 13] Permission denied: on `.incomplete` file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4",
"events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-clancy/followers",
"following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-clancy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ryan-clancy",
"id": 1282383,
"login": "ryan-clancy",
"node_id": "MDQ6VXNlcjEyODIzODM=",
"organizations_url": "https://api.github.com/users/ryan-clancy/orgs",
"received_events_url": "https://api.github.com/users/ryan-clancy/received_events",
"repos_url": "https://api.github.com/users/ryan-clancy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ryan-clancy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-clancy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ryan-clancy",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)",
"> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)\n\n@lhoestq is this something which can go in a 3.5.1 release?",
"Yes for sure",
"@lhoestq - can you take a look at https://github.com/huggingface/datasets/pull/7547/?"
] | 2025-04-24T20:52:45 | 2025-05-06T13:05:01 | 2025-05-06T13:05:01 |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions, leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with no changes will usually succeed.
Is there some race condition happening with the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)?
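For illustration only, a hedged sketch (hypothetical names, not the library's actual code) of the lock-guarded umask handling suggested in the comments, so that concurrent download threads cannot observe each other's transient umask changes:
```python
import os
import threading

_umask_lock = threading.Lock()  # hypothetical module-level lock

def open_with_umask(path: str, umask: int = 0o077):
    """Temporarily apply a umask and open `path` for writing, holding a lock
    so other threads never create files while a transient mask is in effect."""
    with _umask_lock:
        previous = os.umask(umask)
        try:
            return open(path, "wb")
        finally:
            os.umask(previous)
```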
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset
builder_instance.download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare
self._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare
super()._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
.venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators
downloaded_files = dl_manager.download(files)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download
downloaded_path_or_paths = map_nested(
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested
_single_map_nested((function, obj, batched, batch_size, types, None, True, None))
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested
return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched
return thread_map(
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
.venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__
for obj in iterable:
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator
yield _result_or_cancel(fs.pop())
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel
return fut.result(timeout)
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result
return self.__get_result()
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result
raise self._exception
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run
result = self.fn(*self.args, **self.kwargs)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single
out = cached_path(url_or_filename, download_config=download_config)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path
output_path = get_from_cache(
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache
fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get
fs.get_file(path, temp_file.name, callback=callback)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper
return sync(self.loop, func, *args, **kwargs)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync
raise return_result
.venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner
result[0] = await coro
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <s3fs.core.S3FileSystem object at 0x7f27c18b2e70>
rpath = '<my-bucket>/<my-prefix>/img_1.jpg'
lpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'
callback = <datasets.utils.file_utils.TqdmCallback object at 0x7f27c00cdbe0>
version_id = None, kwargs = {}
_open_file = <function S3FileSystem._get_file.<locals>._open_file at 0x7f27628d1120>
body = <StreamingBody at 0x7f276344fa80 for ClientResponse at 0x7f27c015fce0>
content_length = 521923, failed_reads = 0, bytes_read = 0
async def _get_file(
self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs
):
if os.path.isdir(lpath):
return
bucket, key, vers = self.split_path(rpath)
async def _open_file(range: int):
kw = self.req_kw.copy()
if range:
kw["Range"] = f"bytes={range}-"
resp = await self._call_s3(
"get_object",
Bucket=bucket,
Key=key,
**version_id_kw(version_id or vers),
**kw,
)
return resp["Body"], resp.get("ContentLength", None)
body, content_length = await _open_file(range=0)
callback.set_size(content_length)
failed_reads = 0
bytes_read = 0
try:
> with open(lpath, "wb") as f0:
E PermissionError: [Errno 13] Permission denied: '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'
.venv/lib/python3.12/site-packages/s3fs/core.py:1355: PermissionError
```
### Steps to reproduce the bug
I believe this is a race condition and cannot reliably reproduce it, but it happens fairly frequently in our GitHub Actions tests and can also be reproduced (with lesser frequency) on cloud VMs.
### Expected behavior
The dataset loads properly with no permission denied error.
### Environment info
- `datasets` version: 3.5.0
- Platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.12.10
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7536/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7536/timeline
| null |
completed
| null | null | 280.204444 | 68 |
https://api.github.com/repos/huggingface/datasets/issues/7530
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7530/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7530/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7530/events
|
https://github.com/huggingface/datasets/issues/7530
| 3,007,452,499 |
I_kwDODunzps6zQhVT
| 7,530 |
How to solve "Spaces stuck in Building" problems
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n",
"> I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n\nAlso see https://github.com/huggingface/huggingface_hub/issues/3019",
"I'm facing the same issue. The build fails with the same error, and restarting won't help. Is there a fix or ETA? "
] | 2025-04-21T03:08:38 | 2025-04-22T07:49:52 | 2025-04-22T07:49:52 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Public Spaces may get stuck in Building after restarting; the error log is as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401 Unauthorized
### Steps to reproduce the bug
Restarting the Space or doing a Factory rebuild does not avoid it.
### Expected behavior
The Space should build successfully instead of getting stuck in Building.
### Environment info
It can still happen with no requirements.txt.
Python Gradio Spaces.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7530/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7530/timeline
| null |
completed
| null | null | 28.687222 | 74 |
https://api.github.com/repos/huggingface/datasets/issues/7517
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7517/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7517/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7517/events
|
https://github.com/huggingface/datasets/issues/7517
| 2,996,106,077 |
I_kwDODunzps6ylPNd
| 7,517 |
Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}",
"gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/giraffacarp",
"id": 73196164,
"login": "giraffacarp",
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"organizations_url": "https://api.github.com/users/giraffacarp/orgs",
"received_events_url": "https://api.github.com/users/giraffacarp/received_events",
"repos_url": "https://api.github.com/users/giraffacarp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/giraffacarp",
"user_view_type": "public"
}
|
[] |
closed
| false |
{
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}",
"gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/giraffacarp",
"id": 73196164,
"login": "giraffacarp",
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"organizations_url": "https://api.github.com/users/giraffacarp/orgs",
"received_events_url": "https://api.github.com/users/giraffacarp/received_events",
"repos_url": "https://api.github.com/users/giraffacarp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/giraffacarp",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}",
"gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/giraffacarp",
"id": 73196164,
"login": "giraffacarp",
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"organizations_url": "https://api.github.com/users/giraffacarp/orgs",
"received_events_url": "https://api.github.com/users/giraffacarp/received_events",
"repos_url": "https://api.github.com/users/giraffacarp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/giraffacarp",
"user_view_type": "public"
}
] | null |
[
"Hi ! The `Image()` type accepts either\n- a `bytes` object containing the image bytes\n- a `str` object containing the image path\n- a `PIL.Image` object\n\nbut it doesn't support `bytearray`, maybe you can convert to `bytes` beforehand ?",
"Hi @lhoestq, \nconverting to bytes is certainly possible and would work around the error. However, the core issue is that `Dataset` and `IterableDataset` behave differently with the features.\n\nI’d be happy to work on a fix for this issue.",
"I see, that's an issue indeed. Feel free to ping me if I can help with reviews or any guidance\n\nIf it can help, the code that takes a Spark DataFrame and iterates on the rows for `IterableDataset` is here: \n\nhttps://github.com/huggingface/datasets/blob/6a96bf313085d7538a999b929a550e14e1d406c9/src/datasets/packaged_modules/spark/spark.py#L49-L53",
"#self-assign"
] | 2025-04-15T11:29:17 | 2025-05-07T14:17:30 | 2025-05-07T14:17:30 |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'`
### Steps to reproduce the bug
1. Create a Spark DataFrame with a column containing image data as bytearray objects
2. Define a Feature schema with an Image feature
3. Create an IterableDataset using `IterableDataset.from_spark()`
4. Attempt to iterate through the dataset
```python
from pyspark.sql import SparkSession
from datasets import Dataset, IterableDataset, Features, Image, Value
# initialize spark
spark = SparkSession.builder.appName("MinimalRepro").getOrCreate()
# create spark dataframe
data = [(0, open("image.png", "rb").read())]
df = spark.createDataFrame(data, "idx: int, image: binary")
# convert to dataset
features = Features({"idx": Value("int64"), "image": Image()})
ds = Dataset.from_spark(df, features=features)
ds_iter = IterableDataset.from_spark(df, features=features)
# iterate
print(next(iter(ds)))
print(next(iter(ds_iter)))
```
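Until this is fixed, a possible workaround sketch (an assumption based on the maintainer's suggestion to convert the values to `bytes`, not part of the original report) is to build the regular `Dataset` first, which already handles the `bytearray` values, and only then convert it to an iterable dataset — at the cost of materializing the data:
```python
# workaround sketch: Dataset.from_spark handles the bytearray values,
# so convert to an iterable dataset only afterwards (this materializes the data)
ds_iter_ok = Dataset.from_spark(df, features=features).to_iterable_dataset()
print(next(iter(ds_iter_ok)))
```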
### Expected behavior
The features should work on `IterableDataset` the same way they work on `Dataset`
### Environment info
- `datasets` version: 3.5.0
- Platform: macOS-15.3.2-arm64-arm-64bit
- Python version: 3.12.7
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7517/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7517/timeline
| null |
completed
| null | null | 530.803611 | 87 |
https://api.github.com/repos/huggingface/datasets/issues/7516
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7516/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7516/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7516/events
|
https://github.com/huggingface/datasets/issues/7516
| 2,995,780,283 |
I_kwDODunzps6yj_q7
| 7,516 |
unsloth/DeepSeek-R1-Distill-Qwen-32B server error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4",
"events_url": "https://api.github.com/users/Editor-1/events{/privacy}",
"followers_url": "https://api.github.com/users/Editor-1/followers",
"following_url": "https://api.github.com/users/Editor-1/following{/other_user}",
"gists_url": "https://api.github.com/users/Editor-1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Editor-1",
"id": 164353862,
"login": "Editor-1",
"node_id": "U_kgDOCcvXRg",
"organizations_url": "https://api.github.com/users/Editor-1/orgs",
"received_events_url": "https://api.github.com/users/Editor-1/received_events",
"repos_url": "https://api.github.com/users/Editor-1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Editor-1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Editor-1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Editor-1",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-04-15T09:26:53 | 2025-04-15T09:57:26 | 2025-04-15T09:57:26 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
`HfHubHTTPError`: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this as soon as possible!
### Steps to reproduce the bug
unsloth/DeepSeek-R1-Distill-Qwen-32B server error
### Expected behavior
The network/server issue should be repaired so the model can be accessed again.
### Environment info
The web UI is also unavailable.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4",
"events_url": "https://api.github.com/users/Editor-1/events{/privacy}",
"followers_url": "https://api.github.com/users/Editor-1/followers",
"following_url": "https://api.github.com/users/Editor-1/following{/other_user}",
"gists_url": "https://api.github.com/users/Editor-1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Editor-1",
"id": 164353862,
"login": "Editor-1",
"node_id": "U_kgDOCcvXRg",
"organizations_url": "https://api.github.com/users/Editor-1/orgs",
"received_events_url": "https://api.github.com/users/Editor-1/received_events",
"repos_url": "https://api.github.com/users/Editor-1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Editor-1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Editor-1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Editor-1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7516/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7516/timeline
| null |
completed
| null | null | 0.509167 | 88 |
https://api.github.com/repos/huggingface/datasets/issues/7515
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7515/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7515/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7515/events
|
https://github.com/huggingface/datasets/issues/7515
| 2,995,082,418 |
I_kwDODunzps6yhVSy
| 7,515 |
`concatenate_datasets` does not preserve Pytorch format for IterableDataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4",
"events_url": "https://api.github.com/users/francescorubbo/events{/privacy}",
"followers_url": "https://api.github.com/users/francescorubbo/followers",
"following_url": "https://api.github.com/users/francescorubbo/following{/other_user}",
"gists_url": "https://api.github.com/users/francescorubbo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/francescorubbo",
"id": 5140987,
"login": "francescorubbo",
"node_id": "MDQ6VXNlcjUxNDA5ODc=",
"organizations_url": "https://api.github.com/users/francescorubbo/orgs",
"received_events_url": "https://api.github.com/users/francescorubbo/received_events",
"repos_url": "https://api.github.com/users/francescorubbo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/francescorubbo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francescorubbo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/francescorubbo",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! Oh indeed it would be cool to return the same format in that case. Would you like to submit a PR ? The function that does the concatenation is here:\n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/iterable_dataset.py#L3375-L3380",
"Thank you for the pointer, @lhoestq ! See #7522 "
] | 2025-04-15T04:36:34 | 2025-05-19T15:07:38 | 2025-05-19T15:07:38 |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `concatenate_datasets` to a list of `IterableDataset` in PyTorch format (i.e. using `.with_format("torch")`), the output `IterableDataset` is not in PyTorch format.
### Steps to reproduce the bug
```python
import datasets
ds = datasets.Dataset.from_dict({"a": [1,2,3]})
iterable_ds = ds.to_iterable_dataset()
datasets.concatenate_datasets([ds.with_format("torch")])  # <- this preserves the PyTorch format
datasets.concatenate_datasets([iterable_ds.with_format("torch")])  # <- this does NOT preserve the PyTorch format
```
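Until the format is propagated, a possible interim workaround (a sketch, assuming all inputs share the same format) is to re-apply it to the concatenated result:
```python
# workaround sketch: re-apply the torch format after concatenating iterable datasets
combined = datasets.concatenate_datasets([iterable_ds.with_format("torch")]).with_format("torch")
print(next(iter(combined)))  # yields torch tensors again
```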
### Expected behavior
PyTorch format should be preserved when concatenating `IterableDataset` objects in PyTorch format.
### Environment info
datasets==3.5.0, Python 3.11.11, torch==2.2.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7515/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7515/timeline
| null |
completed
| null | null | 826.517778 | 89 |
https://api.github.com/repos/huggingface/datasets/issues/7588
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7588/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7588/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7588/events
|
https://github.com/huggingface/datasets/issues/7588
| 3,094,012,025 |
I_kwDODunzps64auB5
| 7,588 |
ValueError: Invalid pattern: '**' can only be an entire path component [Colab]
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wkambale",
"id": 43061081,
"login": "wkambale",
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"repos_url": "https://api.github.com/users/wkambale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wkambale",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\n```",
"```bash\ndatasets version: 2.14.4\nhuggingface_hub version: 0.31.4\nfsspec version: 2025.3.2\n```",
"Version 2.14.4 is not the latest version available, in fact it is from August 08, 2023 (you can check here: https://pypi.org/project/datasets/#history)\n\nUse pip install datasets==3.6.0 to install a more recent version (from May 7, 2025)\n\nI also had the same problem with Colab, after updating to the latest version it was solved.\n\nI hope it helps",
"thank you @CleitonOERocha. it sure did help.\n\nupdating `datasets` to v3.6.0 and keeping `fsspec` on v2025.3.2 eliminates the issue.",
"Very helpful, thank you!"
] | 2025-05-27T13:46:05 | 2025-05-30T13:22:52 | 2025-05-30T01:26:30 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that I've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate).
Now I changed a few hyperparameters to increase the number of tokens for the model and the number of Transformer layers.
However, when I try to load the dataset, this error keeps coming up. I have tried everything and re-written the code a hundred times, and it still keeps coming up.
### Steps to reproduce the bug
Imports:
```bash
!pip install datasets huggingface_hub fsspec
```
Python code:
```python
from datasets import load_dataset
HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus"
# Load the dataset
try:
if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME":
raise ValueError(
"Please provide a valid Hugging Face dataset name."
)
dataset = load_dataset(HF_DATASET_NAME)
# Omitted code as the error happens on the line above
except ValueError as ve:
print(f"Configuration Error: {ve}")
raise
except Exception as e:
print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}")
raise e
```
Now, I have tried going through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing helps.
### Expected behavior
Loading the dataset successfully and performing splits (train, test, validation).
### Environment info
From the imports, I do not install specific versions of these libraries, so the latest available version of each is installed.
* `datasets` version: latest
* `Platform`: Google Colab
* `Hardware`: NVIDIA A100 GPU
* `Python` version: latest
* `huggingface_hub` version: latest
* `fsspec` version: latest
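Per the resolution in the comments, the Colab runtime actually had an old `datasets` (2.14.4) that is incompatible with recent `fsspec`; upgrading `datasets` fixed the error. A sketch of the install cell (the version pins mirror the comment thread and are otherwise an assumption):
```bash
!pip install -U "datasets>=3.6.0" "fsspec==2025.3.2"
```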
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wkambale",
"id": 43061081,
"login": "wkambale",
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"repos_url": "https://api.github.com/users/wkambale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wkambale",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7588/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7588/timeline
| null |
completed
| null | null | 59.673611 | 118 |
https://api.github.com/repos/huggingface/datasets/issues/7583
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7583/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7583/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7583/events
|
https://github.com/huggingface/datasets/issues/7583
| 3,088,987,757 |
I_kwDODunzps64HjZt
| 7,583 |
load_dataset type stubs reject List[str] for split parameter, but runtime supports it
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25069969?v=4",
"events_url": "https://api.github.com/users/hierr/events{/privacy}",
"followers_url": "https://api.github.com/users/hierr/followers",
"following_url": "https://api.github.com/users/hierr/following{/other_user}",
"gists_url": "https://api.github.com/users/hierr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hierr",
"id": 25069969,
"login": "hierr",
"node_id": "MDQ6VXNlcjI1MDY5OTY5",
"organizations_url": "https://api.github.com/users/hierr/orgs",
"received_events_url": "https://api.github.com/users/hierr/received_events",
"repos_url": "https://api.github.com/users/hierr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hierr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hierr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hierr",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-05-25T02:33:18 | 2025-05-26T18:29:58 | 2025-05-26T18:29:58 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime.
### Steps to reproduce the bug
1. Use load_dataset with multiple splits e.g.:
```python
from datasets import load_dataset
ds_train, ds_val, ds_test = load_dataset(
"Silly-Machine/TuPyE-Dataset",
"binary",
split=["train[:75%]", "train[75%:]", "test"]
)
```
2. Observe that code executes correctly at runtime and Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"`
### Expected behavior
The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior.
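A sketch of the kind of annotation being requested (an assumption about the stub, not the current `datasets` source); until the stubs change, a `# type: ignore` comment on the call site silences Pylance:
```python
# hypothetical split annotation reflecting runtime behaviour
from typing import List, Optional, Union

from datasets import Split

SplitSpec = Optional[Union[str, Split, List[str]]]  # what `split=` accepts at runtime
```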
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.7
- `huggingface_hub` version: 0.32.0
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7583/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7583/timeline
| null |
completed
| null | null | 39.944444 | 123 |
https://api.github.com/repos/huggingface/datasets/issues/7577
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7577/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7577/events
|
https://github.com/huggingface/datasets/issues/7577
| 3,080,833,740 |
I_kwDODunzps63ocrM
| 7,577 |
arrow_schema is not compatible with list
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4",
"events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanshen-upwork/followers",
"following_url": "https://api.github.com/users/jonathanshen-upwork/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanshen-upwork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanshen-upwork",
"id": 164412025,
"login": "jonathanshen-upwork",
"node_id": "U_kgDOCcy6eQ",
"organizations_url": "https://api.github.com/users/jonathanshen-upwork/orgs",
"received_events_url": "https://api.github.com/users/jonathanshen-upwork/received_events",
"repos_url": "https://api.github.com/users/jonathanshen-upwork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanshen-upwork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanshen-upwork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanshen-upwork",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Thanks for reporting, I'll look into it",
"Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = datasets.Features({'x':[datasets.Value(dtype='int32')]})\n```\n\nI'm closing this issue if you don't mind",
"Ah is that what the syntax is? I don't think I was able to find an actual example of it so I assumed it was in the same way that you specify types eg. `list[int]`. This is good to know, thanks."
] | 2025-05-21T16:37:01 | 2025-05-26T18:49:51 | 2025-05-26T18:32:55 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
```
import datasets
f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})
f.arrow_schema
Traceback (most recent call last):
File "datasets/features/features.py", line 1826, in arrow_schema
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
^^^^^^^^^
File "datasets/features/features.py", line 1815, in type
return get_nested_type(self)
^^^^^^^^^^^^^^^^^^^^^
File "datasets/features/features.py", line 1252, in get_nested_type
return pa.struct(
^^^^^^^^^^
File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct
File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field
File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type
TypeError: DataType expected, got <class 'list'>
```
The following works
```python
f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))})
```
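As the maintainer confirmed in the comments, the bracket (sequence) syntax works as well; it is the `list[...]` generic form that is not a valid feature specification:
```python
import datasets

# bracket syntax for a list-of-int32 feature (per the comment thread)
f = datasets.Features({"x": [datasets.Value(dtype="int32")]})
print(f.arrow_schema)
```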
### Expected behavior
According to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765, a Python `list` should be a valid type specification for features.
### Environment info
- `datasets` version: 3.5.1
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7577/timeline
| null |
completed
| null | null | 121.931667 | 128 |
https://api.github.com/repos/huggingface/datasets/issues/7561
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7561/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7561/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7561/events
|
https://github.com/huggingface/datasets/issues/7561
| 3,046,302,653 |
I_kwDODunzps61kuO9
| 7,561 |
NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4",
"events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}",
"followers_url": "https://api.github.com/users/cyanic-selkie/followers",
"following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}",
"gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cyanic-selkie",
"id": 32219669,
"login": "cyanic-selkie",
"node_id": "MDQ6VXNlcjMyMjE5NjY5",
"organizations_url": "https://api.github.com/users/cyanic-selkie/orgs",
"received_events_url": "https://api.github.com/users/cyanic-selkie/received_events",
"repos_url": "https://api.github.com/users/cyanic-selkie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cyanic-selkie",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-05-07T15:05:42 | 2025-06-05T12:41:30 | 2025-06-05T12:41:30 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR.
### Steps to reproduce the bug
1. Create an `IterableDataset`.
2. Call `.repeat(None)` on it.
3. Wrap it in a pytorch `DataLoader`
4. Iterate over it.
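A minimal sketch of those steps (the dataset contents and worker count are illustrative; `torch` is assumed to be installed):
```python
from torch.utils.data import DataLoader

from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(8))}).to_iterable_dataset(num_shards=2)
ds = ds.repeat(None)                    # repeat indefinitely
loader = DataLoader(ds, num_workers=2)  # workers query the iterable's num_shards to split the data
for batch in loader:                    # raises NotImplementedError: ... num_shards
    break
```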
### Expected behavior
This should work normally.
### Environment info
datasets: 3.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7561/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7561/timeline
| null |
completed
| null | null | 693.596667 | 144 |
https://api.github.com/repos/huggingface/datasets/issues/7554
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7554/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7554/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7554/events
|
https://github.com/huggingface/datasets/issues/7554
| 3,043,089,844 |
I_kwDODunzps61Yd20
| 7,554 |
datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sei-eschwartz",
"id": 50171988,
"login": "sei-eschwartz",
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sei-eschwartz",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets in standard formats like Parquet instead.\n\nBtw there is a CLI tool to convert a loading script to parquet:\n\n```\ndatasets-cli convert_to_parquet <dataset-name> --trust_remote_code\n```",
"Closing in favor of #6832 "
] | 2025-05-06T14:43:38 | 2025-05-07T14:53:45 | 2025-05-07T14:53:44 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this.
### Steps to reproduce the bug
See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing)
Or:
```python
from datasets import load_dataset
dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True)
```
### Expected behavior
I expected only the `test_synth` split to be downloaded and processed.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.12
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sei-eschwartz",
"id": 50171988,
"login": "sei-eschwartz",
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sei-eschwartz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7554/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7554/timeline
| null |
duplicate
| null | null | 24.168333 | 150 |
https://api.github.com/repos/huggingface/datasets/issues/7546
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7546/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7546/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7546/events
|
https://github.com/huggingface/datasets/issues/7546
| 3,034,018,298 |
I_kwDODunzps6013H6
| 7,546 |
Large memory use when loading large datasets to a ZFS pool
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FredHaa",
"id": 6875946,
"login": "FredHaa",
"node_id": "MDQ6VXNlcjY4NzU5NDY=",
"organizations_url": "https://api.github.com/users/FredHaa/orgs",
"received_events_url": "https://api.github.com/users/FredHaa/received_events",
"repos_url": "https://api.github.com/users/FredHaa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FredHaa",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?",
"Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](https://streamable.com/usb0ql).\n\nMy system is a GPU server running Ubuntu. The disk is a SATA SSD attached to the server using a backplane. It is formatted with ZFS, mounted in /cache, and my HF_HOME is set to /cache/hf\n\nI really need this fixed, so I am more than willing to test out various suggestions you might have, or write a PR if we can figure out what is going on.",
"I'm not super familiar with ZFS, but it looks like it loads the data in memory when the files are memory mapped, which is an issue.\n\nMaybe it's a caching mechanism ? Since `datasets` accesses every memory mapped file to read a small part (the metadata of the arrow record batches), maybe ZFS brings the whole files in memory for quicker subsequent reads. This is an antipattern when it comes to lazy loading datasets of that size though",
"This is the answer.\n\nI tried changing my HF_HOME to an NFS share, and no RAM is then consumed loading the dataset.\n\nI will try to see if I can find a way to configure the ZFS pool to not cache the files (disabling the ARC/primary cache didn't work), and if I do write the solution in this issue. If I can't I guess I have to reformat my cache drive."
] | 2025-05-01T14:43:47 | 2025-05-13T13:30:09 | 2025-05-13T13:29:53 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When I load large parquet-based datasets from the Hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500 GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train models using multiple large datasets.
### Steps to reproduce the bug
`uv run --with datasets==3.5.1 python`
```python
from datasets import load_dataset
load_dataset('MLCommons/peoples_speech', 'clean')
load_dataset('mozilla-foundation/common_voice_17_0', 'en')
```
### Expected behavior
I would expect that a lot less than 500GB of RAM would be required to load the dataset, or at least that the RAM usage would be cleared as soon as the dataset is loaded (and thus reside as a memory mapped file) such that other datasets can be loaded.
### Environment info
I am currently using the latest datasets==3.5.1 but I have had the same problem with multiple other versions.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FredHaa",
"id": 6875946,
"login": "FredHaa",
"node_id": "MDQ6VXNlcjY4NzU5NDY=",
"organizations_url": "https://api.github.com/users/FredHaa/orgs",
"received_events_url": "https://api.github.com/users/FredHaa/received_events",
"repos_url": "https://api.github.com/users/FredHaa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FredHaa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7546/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7546/timeline
| null |
completed
| null | null | 286.768333 | 158 |
https://api.github.com/repos/huggingface/datasets/issues/7543
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7543/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7543/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7543/events
|
https://github.com/huggingface/datasets/issues/7543
| 3,026,867,706 |
I_kwDODunzps60alX6
| 7,543 |
The memory-disk mapping failure issue of the map function (resolved, but there are some suggestions)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxma20",
"id": 76415358,
"login": "jxma20",
"node_id": "MDQ6VXNlcjc2NDE1MzU4",
"organizations_url": "https://api.github.com/users/jxma20/orgs",
"received_events_url": "https://api.github.com/users/jxma20/received_events",
"repos_url": "https://api.github.com/users/jxma20/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxma20/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxma20",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-04-29T03:04:59 | 2025-04-30T02:22:17 | 2025-04-30T02:22:17 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
## bug
When the map function processes a large dataset, it temporarily stores the data in a cache file on disk. After the data is stored, the memory it occupied is released. Therefore, when using the map function to process a large-scale dataset, only about `writer_batch_size` examples should be held in memory at a time.
However, I found that the map function does not actually reduce memory usage when I used it. At first, I thought there was a bug in the program, causing a memory leak—meaning the memory was not released after the data was stored in the cache. But later, I used a Linux command to check for recently modified files during program execution and found that no new files were created or modified. This indicates that the program did not store the dataset in the disk cache.
## bug solved
After modifying the parameters of the map function multiple times, I discovered the `cache_file_name` parameter. By changing it, the cache file can be stored in the specified directory. After making this change, I noticed that the cache file appeared. Initially, I found this quite incredible, but then I wondered if the cache file might have failed to be stored in a certain folder. This could be related to the fact that I don't have root privileges.
So, I delved into the source code of the map function to find out where the cache file would be stored by default. Eventually, I found the function `def _get_cache_file_path(self, fingerprint):`, which automatically generates the storage path for the cache file. The output was as follows: `/tmp/hf_datasets-j5qco9ug/cache-f2830487643b9cc2.arrow`. My hypothesis was confirmed: the lack of root privileges indeed prevented the cache file from being stored, which in turn prevented the release of memory. Therefore, changing the storage location to a folder where I have write access resolved the issue.
### Steps to reproduce the bug
my code
`train_data = train_data.map(process_fun, remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'])`
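A sketch of the fix described above — the cache path is illustrative; it only needs to point at a directory the process can write to:
```python
train_data = train_data.map(
    process_fun,
    remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'],
    cache_file_name="/data/my_user/hf_cache/train-processed.arrow",  # writable location
)
```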
### Expected behavior
Although my bug has been resolved, it still took me nearly a week to search for relevant information and debug the program. If a warning or error message about insufficient cache-file write permissions were emitted during execution, I might have been able to identify the cause much more quickly, so I hope this aspect can be improved. I am documenting this bug here so that others who encounter similar issues can solve them in a timely manner.
### Environment info
python: 3.10.15
datasets: 3.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxma20",
"id": 76415358,
"login": "jxma20",
"node_id": "MDQ6VXNlcjc2NDE1MzU4",
"organizations_url": "https://api.github.com/users/jxma20/orgs",
"received_events_url": "https://api.github.com/users/jxma20/received_events",
"repos_url": "https://api.github.com/users/jxma20/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxma20/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxma20",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7543/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7543/timeline
| null |
completed
| null | null | 23.288333 | 161 |
https://api.github.com/repos/huggingface/datasets/issues/7538
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7538/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7538/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7538/events
|
https://github.com/huggingface/datasets/issues/7538
| 3,023,280,056 |
I_kwDODunzps60M5e4
| 7,538 |
`IterableDataset` drops samples when resuming from a checkpoint
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false | null |
[] | null |
[
"Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable"
] | 2025-04-27T19:34:49 | 2025-05-06T14:04:05 | 2025-05-06T14:03:42 |
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted.
In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one (after formatting). However, the child increments the `shard_example_idx` counter (in its `iter_arrow`) before returning the batch for the whole batch size, which leads to a portion of samples being skipped if the iteration (of the parent iterable) is stopped mid-batch.
Perhaps one way to avoid this would be by signalling the child iterable which samples (within the chunk) are processed by the parent and which are not, so that it can adjust the `shard_example_idx` counter accordingly. This would also mean the chunk needs to be sliced when resuming, but this is straightforward to implement.
The following is a minimal reproducer of the bug:
```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
ds = Dataset.from_dict({"n": list(range(24))})
ds = ds.to_iterable_dataset(num_shards=4)
world_size = 4
rank = 0
ds_rank = split_dataset_by_node(ds, rank, world_size)
it = iter(ds_rank)
examples = []
for idx, example in enumerate(it):
examples.append(example)
if idx == 2:
state_dict = ds_rank.state_dict()
break
ds_rank.load_state_dict(state_dict)
it_resumed = iter(ds_rank)
examples_resumed = examples[:]
for example in it:
examples.append(example)
for example in it_resumed:
examples_resumed.append(example)
print("ORIGINAL ITER EXAMPLES:", examples)
print("RESUMED ITER EXAMPLES:", examples_resumed)
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7538/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7538/timeline
| null |
completed
| null | null | 210.481389 | 166 |
https://api.github.com/repos/huggingface/datasets/issues/7536
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7536/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7536/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7536/events
|
https://github.com/huggingface/datasets/issues/7536
| 3,018,425,549 |
I_kwDODunzps6z6YTN
| 7,536 |
[Errno 13] Permission denied: on `.incomplete` file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4",
"events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-clancy/followers",
"following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-clancy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ryan-clancy",
"id": 1282383,
"login": "ryan-clancy",
"node_id": "MDQ6VXNlcjEyODIzODM=",
"organizations_url": "https://api.github.com/users/ryan-clancy/orgs",
"received_events_url": "https://api.github.com/users/ryan-clancy/received_events",
"repos_url": "https://api.github.com/users/ryan-clancy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ryan-clancy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-clancy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ryan-clancy",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)",
"> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)\n\n@lhoestq is this something which can go in a 3.5.1 release?",
"Yes for sure",
"@lhoestq - can you take a look at https://github.com/huggingface/datasets/pull/7547/?"
] | 2025-04-24T20:52:45 | 2025-05-06T13:05:01 | 2025-05-06T13:05:01 |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions, leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with no changes will usually succeed.
Is there some race condition happening with the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)?
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset
builder_instance.download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare
self._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare
super()._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
.venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators
downloaded_files = dl_manager.download(files)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download
downloaded_path_or_paths = map_nested(
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested
_single_map_nested((function, obj, batched, batch_size, types, None, True, None))
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested
return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched
return thread_map(
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
.venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__
for obj in iterable:
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator
yield _result_or_cancel(fs.pop())
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel
return fut.result(timeout)
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result
return self.__get_result()
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result
raise self._exception
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run
result = self.fn(*self.args, **self.kwargs)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single
out = cached_path(url_or_filename, download_config=download_config)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path
output_path = get_from_cache(
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache
fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get
fs.get_file(path, temp_file.name, callback=callback)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper
return sync(self.loop, func, *args, **kwargs)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync
raise return_result
.venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner
result[0] = await coro
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <s3fs.core.S3FileSystem object at 0x7f27c18b2e70>
rpath = '<my-bucket>/<my-prefix>/img_1.jpg'
lpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'
callback = <datasets.utils.file_utils.TqdmCallback object at 0x7f27c00cdbe0>
version_id = None, kwargs = {}
_open_file = <function S3FileSystem._get_file.<locals>._open_file at 0x7f27628d1120>
body = <StreamingBody at 0x7f276344fa80 for ClientResponse at 0x7f27c015fce0>
content_length = 521923, failed_reads = 0, bytes_read = 0
async def _get_file(
self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs
):
if os.path.isdir(lpath):
return
bucket, key, vers = self.split_path(rpath)
async def _open_file(range: int):
kw = self.req_kw.copy()
if range:
kw["Range"] = f"bytes={range}-"
resp = await self._call_s3(
"get_object",
Bucket=bucket,
Key=key,
**version_id_kw(version_id or vers),
**kw,
)
return resp["Body"], resp.get("ContentLength", None)
body, content_length = await _open_file(range=0)
callback.set_size(content_length)
failed_reads = 0
bytes_read = 0
try:
> with open(lpath, "wb") as f0:
E PermissionError: [Errno 13] Permission denied: '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'
.venv/lib/python3.12/site-packages/s3fs/core.py:1355: PermissionError
```
### Steps to reproduce the bug
I believe this is a race condition and cannot reproduce it reliably, but it happens fairly frequently in our GitHub Actions tests and can also be reproduced (less frequently) on cloud VMs.
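Purely as an illustration of the suspected interaction (this is not the `datasets` code), a process-global `os.umask` flip in one thread can race with temp-file creation in another and occasionally leave a file with `000` permissions:
```python
# Hypothetical sketch of the suspected race: one set of threads briefly flips
# the process-global umask, another set creates temp files; files created
# while the restrictive umask is in effect end up with 000 permissions.
import os
import tempfile
import threading

def flip_umask():
    # Briefly sets a fully restrictive umask, then restores the previous one,
    # the way a permission-setting helper might.
    old = os.umask(0o777)
    os.umask(old)

def make_temp(modes):
    with tempfile.NamedTemporaryFile(delete=False) as f:
        modes.append(oct(os.stat(f.name).st_mode & 0o777))
    os.unlink(f.name)

modes = []
threads = [threading.Thread(target=flip_umask) for _ in range(200)]
threads += [threading.Thread(target=make_temp, args=(modes,)) for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(set(modes)))  # occasionally contains '0o0'
```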
### Expected behavior
The dataset loads properly with no permission denied error.
### Environment info
- `datasets` version: 3.5.0
- Platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.12.10
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7536/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7536/timeline
| null |
completed
| null | null | 280.204444 | 168 |
https://api.github.com/repos/huggingface/datasets/issues/7530
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7530/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7530/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7530/events
|
https://github.com/huggingface/datasets/issues/7530
| 3,007,452,499 |
I_kwDODunzps6zQhVT
| 7,530 |
How to solve "Spaces stuck in Building" problems
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n",
"> I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n\nAlso see https://github.com/huggingface/huggingface_hub/issues/3019",
"I'm facing the same issue. The build fails with the same error, and restarting won't help. Is there a fix or ETA? "
] | 2025-04-21T03:08:38 | 2025-04-22T07:49:52 | 2025-04-22T07:49:52 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Public Spaces may get stuck in Building after restarting; the error log is as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401 Unauthorized
### Steps to reproduce the bug
Restarting the Space or doing a Factory rebuild does not avoid it.
### Expected behavior
Fix this problem
### Environment info
It can still happen with no requirements.txt.
Python Gradio Spaces.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7530/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7530/timeline
| null |
completed
| null | null | 28.687222 | 174 |
https://api.github.com/repos/huggingface/datasets/issues/7517
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7517/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7517/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7517/events
|
https://github.com/huggingface/datasets/issues/7517
| 2,996,106,077 |
I_kwDODunzps6ylPNd
| 7,517 |
Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}",
"gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/giraffacarp",
"id": 73196164,
"login": "giraffacarp",
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"organizations_url": "https://api.github.com/users/giraffacarp/orgs",
"received_events_url": "https://api.github.com/users/giraffacarp/received_events",
"repos_url": "https://api.github.com/users/giraffacarp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/giraffacarp",
"user_view_type": "public"
}
|
[] |
closed
| false |
{
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}",
"gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/giraffacarp",
"id": 73196164,
"login": "giraffacarp",
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"organizations_url": "https://api.github.com/users/giraffacarp/orgs",
"received_events_url": "https://api.github.com/users/giraffacarp/received_events",
"repos_url": "https://api.github.com/users/giraffacarp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/giraffacarp",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}",
"gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/giraffacarp",
"id": 73196164,
"login": "giraffacarp",
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"organizations_url": "https://api.github.com/users/giraffacarp/orgs",
"received_events_url": "https://api.github.com/users/giraffacarp/received_events",
"repos_url": "https://api.github.com/users/giraffacarp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/giraffacarp",
"user_view_type": "public"
}
] | null |
[
"Hi ! The `Image()` type accepts either\n- a `bytes` object containing the image bytes\n- a `str` object containing the image path\n- a `PIL.Image` object\n\nbut it doesn't support `bytearray`, maybe you can convert to `bytes` beforehand ?",
"Hi @lhoestq, \nconverting to bytes is certainly possible and would work around the error. However, the core issue is that `Dataset` and `IterableDataset` behave differently with the features.\n\nI’d be happy to work on a fix for this issue.",
"I see, that's an issue indeed. Feel free to ping me if I can help with reviews or any guidance\n\nIf it can help, the code that takes a Spark DataFrame and iterates on the rows for `IterableDataset` is here: \n\nhttps://github.com/huggingface/datasets/blob/6a96bf313085d7538a999b929a550e14e1d406c9/src/datasets/packaged_modules/spark/spark.py#L49-L53",
"#self-assign"
] | 2025-04-15T11:29:17 | 2025-05-07T14:17:30 | 2025-05-07T14:17:30 |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'`
### Steps to reproduce the bug
1. Create a Spark DataFrame with a column containing image data as bytearray objects
2. Define a Feature schema with an Image feature
3. Create an IterableDataset using `IterableDataset.from_spark()`
4. Attempt to iterate through the dataset
```
from pyspark.sql import SparkSession
from datasets import Dataset, IterableDataset, Features, Image, Value
# initialize spark
spark = SparkSession.builder.appName("MinimalRepro").getOrCreate()
# create spark dataframe
data = [(0, open("image.png", "rb").read())]
df = spark.createDataFrame(data, "idx: int, image: binary")
# convert to dataset
features = Features({"idx": Value("int64"), "image": Image()})
ds = Dataset.from_spark(df, features=features)
ds_iter = IterableDataset.from_spark(df, features=features)
# iterate
print(next(iter(ds)))
print(next(iter(ds_iter)))
```
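A possible workaround, following the suggestion in the comments, is to coerce the `bytearray` payloads to `bytes` before building the dataset; this sketch assumes the `df`, `features`, `Dataset`, and `Image` objects from the snippet above and a DataFrame small enough to collect on the driver:
```python
# Hedged workaround sketch (assumes the repro above): collect the rows,
# coerce bytearray -> bytes, then build the dataset and cast to Image.
rows = [{"idx": row["idx"], "image": bytes(row["image"])} for row in df.collect()]
ds_fixed = Dataset.from_list(rows).cast_column("image", Image())
print(next(iter(ds_fixed)))
```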
### Expected behavior
The features should work on `IterableDataset` the same way they work on `Dataset`
### Environment info
- `datasets` version: 3.5.0
- Platform: macOS-15.3.2-arm64-arm-64bit
- Python version: 3.12.7
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7517/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7517/timeline
| null |
completed
| null | null | 530.803611 | 187 |
https://api.github.com/repos/huggingface/datasets/issues/7516
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7516/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7516/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7516/events
|
https://github.com/huggingface/datasets/issues/7516
| 2,995,780,283 |
I_kwDODunzps6yj_q7
| 7,516 |
unsloth/DeepSeek-R1-Distill-Qwen-32B server error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4",
"events_url": "https://api.github.com/users/Editor-1/events{/privacy}",
"followers_url": "https://api.github.com/users/Editor-1/followers",
"following_url": "https://api.github.com/users/Editor-1/following{/other_user}",
"gists_url": "https://api.github.com/users/Editor-1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Editor-1",
"id": 164353862,
"login": "Editor-1",
"node_id": "U_kgDOCcvXRg",
"organizations_url": "https://api.github.com/users/Editor-1/orgs",
"received_events_url": "https://api.github.com/users/Editor-1/received_events",
"repos_url": "https://api.github.com/users/Editor-1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Editor-1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Editor-1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Editor-1",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[] | 2025-04-15T09:26:53 | 2025-04-15T09:57:26 | 2025-04-15T09:57:26 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
hfhubhttperror: 500 server error: internal server error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this as soon as possible!
### Steps to reproduce the bug
unsloth/DeepSeek-R1-Distill-Qwen-32B server error
### Expected behavior
Network repair
### Environment info
The web side is also unavailable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4",
"events_url": "https://api.github.com/users/Editor-1/events{/privacy}",
"followers_url": "https://api.github.com/users/Editor-1/followers",
"following_url": "https://api.github.com/users/Editor-1/following{/other_user}",
"gists_url": "https://api.github.com/users/Editor-1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Editor-1",
"id": 164353862,
"login": "Editor-1",
"node_id": "U_kgDOCcvXRg",
"organizations_url": "https://api.github.com/users/Editor-1/orgs",
"received_events_url": "https://api.github.com/users/Editor-1/received_events",
"repos_url": "https://api.github.com/users/Editor-1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Editor-1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Editor-1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Editor-1",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7516/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7516/timeline
| null |
completed
| null | null | 0.509167 | 188 |
https://api.github.com/repos/huggingface/datasets/issues/7515
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7515/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7515/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7515/events
|
https://github.com/huggingface/datasets/issues/7515
| 2,995,082,418 |
I_kwDODunzps6yhVSy
| 7,515 |
`concatenate_datasets` does not preserve Pytorch format for IterableDataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4",
"events_url": "https://api.github.com/users/francescorubbo/events{/privacy}",
"followers_url": "https://api.github.com/users/francescorubbo/followers",
"following_url": "https://api.github.com/users/francescorubbo/following{/other_user}",
"gists_url": "https://api.github.com/users/francescorubbo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/francescorubbo",
"id": 5140987,
"login": "francescorubbo",
"node_id": "MDQ6VXNlcjUxNDA5ODc=",
"organizations_url": "https://api.github.com/users/francescorubbo/orgs",
"received_events_url": "https://api.github.com/users/francescorubbo/received_events",
"repos_url": "https://api.github.com/users/francescorubbo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/francescorubbo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francescorubbo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/francescorubbo",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! Oh indeed it would be cool to return the same format in that case. Would you like to submit a PR ? The function that does the concatenation is here:\n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/iterable_dataset.py#L3375-L3380",
"Thank you for the pointer, @lhoestq ! See #7522 "
] | 2025-04-15T04:36:34 | 2025-05-19T15:07:38 | 2025-05-19T15:07:38 |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming the format is consistent across them). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `concatenate_datasets` to a list of `IterableDataset` in PyTorch format (i.e. using `.with_format("torch")`), the output `IterableDataset` is not in PyTorch format.
### Steps to reproduce the bug
```
import datasets
ds = datasets.Dataset.from_dict({"a": [1,2,3]})
iterable_ds = ds.to_iterable_dataset()
datasets.concatenate_datasets([ds.with_format("torch")]) # <- this preserves Pytorch format
datasets.concatenate_datasets([iterable_ds.with_format("torch")]) # <- this does NOT preserve Pytorch format
```
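As a stopgap until this is fixed, re-applying the format on the concatenated result appears to work; a minimal sketch assuming the objects above:
```python
# Hedged stopgap sketch (assumes `datasets` and `iterable_ds` from above):
# re-apply the torch format to the concatenated result until the format is
# propagated automatically.
combined = datasets.concatenate_datasets([iterable_ds.with_format("torch")]).with_format("torch")
print(next(iter(combined)))  # yields torch tensors again
```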
### Expected behavior
Pytorch format should be preserved when combining IterableDataset in Pytorch format.
### Environment info
datasets==3.5.0, Python 3.11.11, torch==2.2.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7515/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7515/timeline
| null |
completed
| null | null | 826.517778 | 189 |
https://api.github.com/repos/huggingface/datasets/issues/7502
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7502/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7502/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7502/events
|
https://github.com/huggingface/datasets/issues/7502
| 2,977,453,814 |
I_kwDODunzps6xeFb2
| 7,502 |
`load_dataset` of size 40GB creates a cache of >720GB
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pietrolesci",
"id": 61748653,
"login": "pietrolesci",
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pietrolesci",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Hi ! Parquet is a compressed format. When you load a dataset, it uncompresses the Parquet data into Arrow data on your disk. That's why you can indeed end up with 720GB of uncompressed data on disk. The uncompression is needed to enable performant dataset objects (especially for random access).\n\nTo save some storage you can instead load the dataset with `streaming=True`. This way you get an `IterableDataset` that reads the Parquet data iteratively without ever writing to disk.\n\nPS: `ReadInstruction` might not be implemented for `streaming=True`, if it's the case you can use `ds.take()` and `ds.skip()` instead",
"Hi @lhoestq, thanks a lot for your answer. This makes perfect sense. I will try using the streaming mode. Closing the issue."
] | 2025-04-07T16:52:34 | 2025-04-15T15:22:12 | 2025-04-15T15:22:11 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Hi there,
I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:
```python
ds = DatasetDict(
{
"train": load_dataset(
"parquet",
data_dir=f"{local_dir}/{tok}",
cache_dir=cache_dir,
num_proc=min(12, os.cpu_count()), # type: ignore
split=ReadInstruction("train", from_=0, to=NUM_TRAIN, unit="abs"), # type: ignore
),
"validation": load_dataset(
"parquet",
data_dir=f"{local_dir}/{tok}",
cache_dir=cache_dir,
num_proc=min(12, os.cpu_count()), # type: ignore
split=ReadInstruction("train", from_=NUM_TRAIN, unit="abs"), # type: ignore
)
}
)
```
which still strangely creates 720GB of cache. In addition, if I remove the raw parquet file folder (`f"{local_dir}/{tok}"` in this example), I am not able to load anything. So, I am left wondering what this cache is doing. Am I missing something? Is there a solution to this problem?
Thanks a lot in advance for your help!
A related issue: https://github.com/huggingface/transformers/issues/10204#issue-809007443.
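For reference, a sketch of the streaming approach suggested in the comments (it assumes the same `local_dir`, `tok`, and `NUM_TRAIN` as above); the parquet files are read iteratively, so nothing is materialized in the Arrow cache:
```python
# Hedged sketch of the streaming alternative (assumes local_dir, tok and
# NUM_TRAIN from the snippet above): split with take()/skip() instead of
# ReadInstruction, without writing Arrow files to disk.
from datasets import IterableDatasetDict, load_dataset

streamed = load_dataset("parquet", data_dir=f"{local_dir}/{tok}", split="train", streaming=True)
ds = IterableDatasetDict({
    "train": streamed.take(NUM_TRAIN),
    "validation": streamed.skip(NUM_TRAIN),
})
```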
---
Python: 3.11.11
datasets: 3.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pietrolesci",
"id": 61748653,
"login": "pietrolesci",
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pietrolesci",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7502/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7502/timeline
| null |
completed
| null | null | 190.493611 | 201 |
https://api.github.com/repos/huggingface/datasets/issues/7501
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7501/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7501/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7501/events
|
https://github.com/huggingface/datasets/issues/7501
| 2,976,721,014 |
I_kwDODunzps6xbSh2
| 7,501 |
Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26623948?v=4",
"events_url": "https://api.github.com/users/yaner-here/events{/privacy}",
"followers_url": "https://api.github.com/users/yaner-here/followers",
"following_url": "https://api.github.com/users/yaner-here/following{/other_user}",
"gists_url": "https://api.github.com/users/yaner-here/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yaner-here",
"id": 26623948,
"login": "yaner-here",
"node_id": "MDQ6VXNlcjI2NjIzOTQ4",
"organizations_url": "https://api.github.com/users/yaner-here/orgs",
"received_events_url": "https://api.github.com/users/yaner-here/received_events",
"repos_url": "https://api.github.com/users/yaner-here/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yaner-here/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaner-here/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yaner-here",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"Solved by the default `load_dataset(features)` parameters. Do not use `Sequence` for the `list` in `list[any]` json schema, just simply use `[]`. For example, `\"b\": Sequence({...})` fails but `\"b\": [{...}]` works fine."
] | 2025-04-07T12:35:39 | 2025-04-07T12:43:04 | 2025-04-07T12:43:03 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
`datasets.Features` seems to be unable to handle a JSON file that contains fields of type `list[dict]`.
### Steps to reproduce the bug
```json
// test.json
{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}
{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}
```
```python
import json
from datasets import Dataset, Features, Value, Sequence, load_dataset
annotation_feature = Features({
"a": Value("int32"),
"b": Sequence({
"c": Value("int32"),
"d": Value("int32"),
}),
})
annotation_dataset = load_dataset(
"json",
data_files="test.json",
features=annotation_feature
)
```
```
ArrowNotImplementedError: Unsupported cast from list<item: struct<c: int32, d: int32>> to struct using function cast_struct
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[46], line 11
2 from datasets import Dataset, Features, Value, Sequence, load_dataset
4 annotation_feature = Features({
5 "a": Value("int32"),
6 "b": Sequence({
(...) 9 }),
10 })
---> 11 annotation_dataset = load_dataset(
12 "json",
13 data_files="test.json",
14 features=annotation_feature
15 )
```
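For reference, a sketch of the schema that works according to the resolution comment (a plain Python list instead of `Sequence` for list-of-struct columns), assuming the same `test.json`:
```python
# Hedged sketch of the fix noted in the comments: declare list-of-dict columns
# with a plain Python list instead of Sequence (assumes the same test.json).
from datasets import Features, Value, load_dataset

annotation_feature = Features({
    "a": Value("int32"),
    "b": [{"c": Value("int32"), "d": Value("int32")}],
})
annotation_dataset = load_dataset("json", data_files="test.json", features=annotation_feature)
```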
### Expected behavior
A `datasets.Dataset` instance should be initialized.
### Environment info
- `datasets` version: 3.5.0
- Platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26623948?v=4",
"events_url": "https://api.github.com/users/yaner-here/events{/privacy}",
"followers_url": "https://api.github.com/users/yaner-here/followers",
"following_url": "https://api.github.com/users/yaner-here/following{/other_user}",
"gists_url": "https://api.github.com/users/yaner-here/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yaner-here",
"id": 26623948,
"login": "yaner-here",
"node_id": "MDQ6VXNlcjI2NjIzOTQ4",
"organizations_url": "https://api.github.com/users/yaner-here/orgs",
"received_events_url": "https://api.github.com/users/yaner-here/received_events",
"repos_url": "https://api.github.com/users/yaner-here/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yaner-here/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaner-here/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yaner-here",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7501/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7501/timeline
| null |
completed
| null | null | 0.123333 | 202 |
https://api.github.com/repos/huggingface/datasets/issues/7494
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7494/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7494/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7494/events
|
https://github.com/huggingface/datasets/issues/7494
| 2,965,347,685 |
I_kwDODunzps6wv51l
| 7,494 |
Broken links in pdf loading documentation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/75789232?v=4",
"events_url": "https://api.github.com/users/VyoJ/events{/privacy}",
"followers_url": "https://api.github.com/users/VyoJ/followers",
"following_url": "https://api.github.com/users/VyoJ/following{/other_user}",
"gists_url": "https://api.github.com/users/VyoJ/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VyoJ",
"id": 75789232,
"login": "VyoJ",
"node_id": "MDQ6VXNlcjc1Nzg5MjMy",
"organizations_url": "https://api.github.com/users/VyoJ/orgs",
"received_events_url": "https://api.github.com/users/VyoJ/received_events",
"repos_url": "https://api.github.com/users/VyoJ/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VyoJ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VyoJ/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VyoJ",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"thanks for reporting ! I fixed the links, the docs will be updated in the next release"
] | 2025-04-02T06:45:22 | 2025-04-15T13:36:25 | 2025-04-15T13:36:04 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hi, just a couple of small issues I ran into while reading the docs for [loading pdf data](https://huggingface.co/docs/datasets/main/en/document_load):
1. The link for the [`Create a pdf dataset`](https://huggingface.co/docs/datasets/main/en/document_load#pdffolder) points to https://huggingface.co/docs/datasets/main/en/pdf_dataset instead of https://huggingface.co/docs/datasets/main/en/document_dataset and hence gives a 404 error.
2. At the top of the page, it's mentioned that to work with pdf datasets we need to have the `pdfplumber` package installed but the link to its installation guide points to `pytorch/vision` [installation instructions](https://github.com/pytorch/vision#installation) instead of `pdfplumber`'s [guide](https://github.com/jsvine/pdfplumber#installation)
I love the work on enabling pdf dataset support and these small tweaks would help everyone navigate the docs better. Thanks!
### Steps to reproduce the bug
The issue is on the [Load Document Data](https://huggingface.co/docs/datasets/main/en/document_load) page of the datasets docs.
### Expected behavior
1. For solving the first issue, I went through the [source .mdx code](https://github.com/huggingface/datasets/blob/main/docs/source/document_load.mdx?plain=1#L188) of the datasets docs and found that the link is pointing to `./pdf_dataset` instead of `./document_dataset`
2. For the second issue, I went through the [source .mdx code](https://github.com/huggingface/datasets/blob/main/docs/source/document_load.mdx?plain=1#L13) of the datasets docs and found that the link is `pytorch/vision` [installation instructions](https://github.com/pytorch/vision#installation) instead of `pdfplumber`'s [guide](https://github.com/jsvine/pdfplumber#installation)
Just replacing these two links should fix the bugs
### Environment info
datasets v3.5.0 (main at the time of writing)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7494/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7494/timeline
| null |
completed
| null | null | 318.845 | 209 |
https://api.github.com/repos/huggingface/datasets/issues/7486
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7486/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7486/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7486/events
|
https://github.com/huggingface/datasets/issues/7486
| 2,954,042,179 |
I_kwDODunzps6wExtD
| 7,486 |
`shared_datadir` fixture is missing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1289205?v=4",
"events_url": "https://api.github.com/users/lahwaacz/events{/privacy}",
"followers_url": "https://api.github.com/users/lahwaacz/followers",
"following_url": "https://api.github.com/users/lahwaacz/following{/other_user}",
"gists_url": "https://api.github.com/users/lahwaacz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lahwaacz",
"id": 1289205,
"login": "lahwaacz",
"node_id": "MDQ6VXNlcjEyODkyMDU=",
"organizations_url": "https://api.github.com/users/lahwaacz/orgs",
"received_events_url": "https://api.github.com/users/lahwaacz/received_events",
"repos_url": "https://api.github.com/users/lahwaacz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lahwaacz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lahwaacz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lahwaacz",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null |
[
"OK I was missing the `pytest-datadir` package. Sorry for the noise!"
] | 2025-03-27T18:17:12 | 2025-03-27T19:49:11 | 2025-03-27T19:49:10 |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Running the tests for the latest release fails due to the missing `shared_datadir` fixture.
### Steps to reproduce the bug
Running `pytest` while building a package for Arch Linux leads to these errors:
```
==================================== ERRORS ====================================
_________ ERROR at setup of test_pdf_feature_encode_example[<lambda>1] _________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8
@require_pdfplumber
@pytest.mark.parametrize(
"build_example",
[
lambda pdf_path: pdf_path,
lambda pdf_path: open(pdf_path, "rb").read(),
lambda pdf_path: {"path": pdf_path},
lambda pdf_path: {"path": pdf_path, "bytes": None},
lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"bytes": open(pdf_path, "rb").read()},
],
)
def test_pdf_feature_encode_example(shared_datadir, build_example):
E fixture 'shared_datadir' not found
> available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
> use 'pytest --fixtures [testpath]' for help on them.
/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8
_________ ERROR at setup of test_pdf_feature_encode_example[<lambda>2] _________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8
@require_pdfplumber
@pytest.mark.parametrize(
"build_example",
[
lambda pdf_path: pdf_path,
lambda pdf_path: open(pdf_path, "rb").read(),
lambda pdf_path: {"path": pdf_path},
lambda pdf_path: {"path": pdf_path, "bytes": None},
lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"bytes": open(pdf_path, "rb").read()},
],
)
def test_pdf_feature_encode_example(shared_datadir, build_example):
E fixture 'shared_datadir' not found
> available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
> use 'pytest --fixtures [testpath]' for help on them.
/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8
_________ ERROR at setup of test_pdf_feature_encode_example[<lambda>3] _________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8
@require_pdfplumber
@pytest.mark.parametrize(
"build_example",
[
lambda pdf_path: pdf_path,
lambda pdf_path: open(pdf_path, "rb").read(),
lambda pdf_path: {"path": pdf_path},
lambda pdf_path: {"path": pdf_path, "bytes": None},
lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"bytes": open(pdf_path, "rb").read()},
],
)
def test_pdf_feature_encode_example(shared_datadir, build_example):
E fixture 'shared_datadir' not found
> available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
> use 'pytest --fixtures [testpath]' for help on them.
/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8
_________ ERROR at setup of test_pdf_feature_encode_example[<lambda>4] _________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8
@require_pdfplumber
@pytest.mark.parametrize(
"build_example",
[
lambda pdf_path: pdf_path,
lambda pdf_path: open(pdf_path, "rb").read(),
lambda pdf_path: {"path": pdf_path},
lambda pdf_path: {"path": pdf_path, "bytes": None},
lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"bytes": open(pdf_path, "rb").read()},
],
)
def test_pdf_feature_encode_example(shared_datadir, build_example):
E fixture 'shared_datadir' not found
> available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
> use 'pytest --fixtures [testpath]' for help on them.
/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8
_________ ERROR at setup of test_pdf_feature_encode_example[<lambda>5] _________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8
@require_pdfplumber
@pytest.mark.parametrize(
"build_example",
[
lambda pdf_path: pdf_path,
lambda pdf_path: open(pdf_path, "rb").read(),
lambda pdf_path: {"path": pdf_path},
lambda pdf_path: {"path": pdf_path, "bytes": None},
lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"bytes": open(pdf_path, "rb").read()},
],
)
def test_pdf_feature_encode_example(shared_datadir, build_example):
E fixture 'shared_datadir' not found
> available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
> use 'pytest --fixtures [testpath]' for help on them.
/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8
_________ ERROR at setup of test_pdf_feature_encode_example[<lambda>6] _________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8
@require_pdfplumber
@pytest.mark.parametrize(
"build_example",
[
lambda pdf_path: pdf_path,
lambda pdf_path: open(pdf_path, "rb").read(),
lambda pdf_path: {"path": pdf_path},
lambda pdf_path: {"path": pdf_path, "bytes": None},
lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"bytes": open(pdf_path, "rb").read()},
],
)
def test_pdf_feature_encode_example(shared_datadir, build_example):
E fixture 'shared_datadir' not found
> available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
> use 'pytest --fixtures [testpath]' for help on them.
/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8
_______________ ERROR at setup of test_dataset_with_pdf_feature ________________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 34
@require_pdfplumber
def test_dataset_with_pdf_feature(shared_datadir):
E fixture 'shared_datadir' not found
> available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
> use 'pytest --fixtures [testpath]' for help on them.
/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:34
_________ ERROR at setup of test_pdf_feature_encode_example[<lambda>0] _________
[gw46] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8
@require_pdfplumber
@pytest.mark.parametrize(
"build_example",
[
lambda pdf_path: pdf_path,
lambda pdf_path: open(pdf_path, "rb").read(),
lambda pdf_path: {"path": pdf_path},
lambda pdf_path: {"path": pdf_path, "bytes": None},
lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()},
lambda pdf_path: {"bytes": open(pdf_path, "rb").read()},
],
)
def test_pdf_feature_encode_example(shared_datadir, build_example):
E fixture 'shared_datadir' not found
> available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
> use 'pytest --fixtures [testpath]' for help on them.
/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8
```
### Expected behavior
All fixtures used in tests should be available.
### Environment info
Arch Linux build system, building the [python-datasets](https://gitlab.archlinux.org/archlinux/packaging/packages/python-datasets) package.
There are actually [many deselected tests](https://gitlab.archlinux.org/archlinux/packaging/packages/python-datasets/-/blob/6f97957f0c326cc7b3da6b7f12326305bcaef374/PKGBUILD#L66-148) which were failing on previous releases, but these errors popped up in 3.5.0.
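Based on the resolution comment, the fixture comes from the `pytest-datadir` plugin, so the build environment needs it installed; a minimal, purely illustrative check:
```python
# Hedged, purely illustrative check: the `shared_datadir` fixture is provided
# by the pytest-datadir plugin, so the build venv needs it installed
# (e.g. `python -m pip install pytest-datadir`).
import importlib.util

if importlib.util.find_spec("pytest_datadir") is None:
    raise SystemExit("pytest-datadir is not installed; install it before running the test suite")
```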
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1289205?v=4",
"events_url": "https://api.github.com/users/lahwaacz/events{/privacy}",
"followers_url": "https://api.github.com/users/lahwaacz/followers",
"following_url": "https://api.github.com/users/lahwaacz/following{/other_user}",
"gists_url": "https://api.github.com/users/lahwaacz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lahwaacz",
"id": 1289205,
"login": "lahwaacz",
"node_id": "MDQ6VXNlcjEyODkyMDU=",
"organizations_url": "https://api.github.com/users/lahwaacz/orgs",
"received_events_url": "https://api.github.com/users/lahwaacz/received_events",
"repos_url": "https://api.github.com/users/lahwaacz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lahwaacz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lahwaacz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lahwaacz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7486/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7486/timeline
| null |
completed
| null | null | 1.532778 | 217 |