url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 46
51
| id
int64 599M
3.14B
| node_id
stringlengths 18
32
| number
int64 1
7.61k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| milestone
dict | comments
sequencelengths 0
30
| created_at
timestamp[ns, tz=UTC]date 2020-04-14 10:18:02
2025-06-13 09:02:24
| updated_at
timestamp[ns, tz=UTC]date 2020-04-27 16:04:17
2025-06-13 10:38:04
| closed_at
timestamp[ns, tz=UTC]date 2020-04-14 12:01:40
2025-06-13 00:44:27
⌀ | author_association
stringclasses 4
values | type
float64 | active_lock_reason
float64 | draft
float64 0
1
⌀ | pull_request
dict | body
stringlengths 0
228k
⌀ | closed_by
dict | reactions
dict | timeline_url
stringlengths 67
70
| performed_via_github_app
float64 | state_reason
stringclasses 4
values | sub_issues_summary
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7613 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7613/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7613/comments | https://api.github.com/repos/huggingface/datasets/issues/7613/events | https://github.com/huggingface/datasets/pull/7613 | 3,142,819,991 | PR_kwDODunzps6aWgr3 | 7,613 | fix parallel push_to_hub in dataset_dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7613). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-13T09:02:24 | 2025-06-13T10:38:04 | null | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7613.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7613",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7613.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7613"
} | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7613/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7613/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7612/comments | https://api.github.com/repos/huggingface/datasets/issues/7612/events | https://github.com/huggingface/datasets/issues/7612 | 3,141,905,049 | I_kwDODunzps67RaqZ | 7,612 | Provide an option of robust dataset iterator with error handling | {
"avatar_url": "https://avatars.githubusercontent.com/u/40016222?v=4",
"events_url": "https://api.github.com/users/wwwjn/events{/privacy}",
"followers_url": "https://api.github.com/users/wwwjn/followers",
"following_url": "https://api.github.com/users/wwwjn/following{/other_user}",
"gists_url": "https://api.github.com/users/wwwjn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wwwjn",
"id": 40016222,
"login": "wwwjn",
"node_id": "MDQ6VXNlcjQwMDE2MjIy",
"organizations_url": "https://api.github.com/users/wwwjn/orgs",
"received_events_url": "https://api.github.com/users/wwwjn/received_events",
"repos_url": "https://api.github.com/users/wwwjn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wwwjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwwjn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wwwjn",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-06-13T00:40:48 | 2025-06-13T00:42:00 | null | NONE | null | null | null | null | ### Feature request
Adding an option to skip corrupted data samples when __iter__ is called. Currently the datasets behavior is throwing errors if the data sample if corrupted and let user aware and handle the data corruption. When I tried to try-catch the error at user level, the iterator will raise StopIteration when I called next() again.
The way I try to do error handling is: (This doesn't work, unfortunately)
```
# Load the dataset with streaming enabled
dataset = load_dataset(
"pixparse/cc12m-wds", split="train", streaming=True
)
# Get an iterator from the dataset
iterator = iter(dataset)
while True:
try:
# Try to get the next example
example = next(iterator)
# Try to access and process the image
image = example["jpg"]
pil_image = Image.fromarray(np.array(image))
pil_image.verify() # Verify it's a valid image file
except StopIteration: # Code path 1
print("\nStopIteration was raised! Reach the end of dataset")
raise StopIteration
except Exception as e: # Code path 2
errors += 1
print("Error! Skip this sample")
cotinue
else:
successful += 1
```
This is because the `IterableDataset` already throws an error (reaches Code path 2). And if I continue call next(), it will hit Code path 1. This is because the inner iterator of `IterableDataset`([code](https://github.com/huggingface/datasets/blob/89bd1f971402acb62805ef110bc1059c38b1c8c6/src/datasets/iterable_dataset.py#L2242)) as been stopped, so calling next() on it will raise StopIteration.
So I can not skip the corrupted data sample in this way. Would also love to hear any suggestions about creating a robust dataloader.
Thanks for your help in advance!
### Motivation
## Public dataset corruption might be common
A lot of users would use public dataset, and the public dataset might contains some corrupted data, especially for dataset with image / video etc. I totally understand it's dataset owner and user's responsibility to ensure the data integrity / run data cleaning or preprocessing, but it would be easier for developers who would use the dataset
## Use cases
For example, a robust dataloader would be easy for users who want to try quick tests on different dataset, and chose one dataset which fits their needs. So user could use IterableDataloader with `stream=True` to use the dataset easily without downloading and removing corrupted data samples from the dataset.
### Your contribution
The error handling might not trivial and might need more careful design. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7612/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7611/comments | https://api.github.com/repos/huggingface/datasets/issues/7611/events | https://github.com/huggingface/datasets/issues/7611 | 3,141,383,940 | I_kwDODunzps67PbcE | 7,611 | Code example for dataset.add_column() does not reflect correct way to use function | {
"avatar_url": "https://avatars.githubusercontent.com/u/31388649?v=4",
"events_url": "https://api.github.com/users/shaily99/events{/privacy}",
"followers_url": "https://api.github.com/users/shaily99/followers",
"following_url": "https://api.github.com/users/shaily99/following{/other_user}",
"gists_url": "https://api.github.com/users/shaily99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shaily99",
"id": 31388649,
"login": "shaily99",
"node_id": "MDQ6VXNlcjMxMzg4NjQ5",
"organizations_url": "https://api.github.com/users/shaily99/orgs",
"received_events_url": "https://api.github.com/users/shaily99/received_events",
"repos_url": "https://api.github.com/users/shaily99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shaily99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaily99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shaily99",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-12T19:42:29 | 2025-06-12T19:42:29 | null | NONE | null | null | null | null | https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10
The example seems to suggest that dataset.add_column() can add column inplace, however, this is wrong -- it cannot. It returns a new dataset with the column added to it. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7611/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7610/comments | https://api.github.com/repos/huggingface/datasets/issues/7610/events | https://github.com/huggingface/datasets/issues/7610 | 3,141,281,560 | I_kwDODunzps67PCcY | 7,610 | i cant confirm email | {
"avatar_url": "https://avatars.githubusercontent.com/u/187984415?v=4",
"events_url": "https://api.github.com/users/lykamspam/events{/privacy}",
"followers_url": "https://api.github.com/users/lykamspam/followers",
"following_url": "https://api.github.com/users/lykamspam/following{/other_user}",
"gists_url": "https://api.github.com/users/lykamspam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lykamspam",
"id": 187984415,
"login": "lykamspam",
"node_id": "U_kgDOCzRqHw",
"organizations_url": "https://api.github.com/users/lykamspam/orgs",
"received_events_url": "https://api.github.com/users/lykamspam/received_events",
"repos_url": "https://api.github.com/users/lykamspam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lykamspam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lykamspam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lykamspam",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-12T18:58:49 | 2025-06-12T18:58:49 | null | NONE | null | null | null | null | ### Describe the bug
This is dificult, I cant confirm email because I'm not get any email!
I cant post forum because I cant confirm email!
I can send help desk because... no exist on web page.
paragraph 44
### Steps to reproduce the bug
rthjrtrt
### Expected behavior
ewtgfwetgf
### Environment info
sdgfswdegfwe | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7610/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7610/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7609/comments | https://api.github.com/repos/huggingface/datasets/issues/7609/events | https://github.com/huggingface/datasets/pull/7609 | 3,140,373,128 | PR_kwDODunzps6aOQ_g | 7,609 | Update `_dill.py` to use `co_linetable` for Python 3.10+ in place of `co_lnotab` | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7609). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"not 100% sure either, I tried removing unnecessary checks - let me know if they sound good to you otherwise I'll revert",
"I can't reproduce the warning anymore... 🤦🏻♂️\r\n"
] | 2025-06-12T13:47:01 | 2025-06-12T14:56:11 | null | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7609.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7609",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7609.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7609"
} | Not 100% about this one, but it seems to be recommended.
```
/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
```
Tests pass locally. And the warning is gone with this change.
https://peps.python.org/pep-0626/#backwards-compatibility | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7609/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7609/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7608/comments | https://api.github.com/repos/huggingface/datasets/issues/7608/events | https://github.com/huggingface/datasets/pull/7608 | 3,137,564,259 | PR_kwDODunzps6aEr6b | 7,608 | Tests typing and fixes for push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-11T17:13:52 | 2025-06-12T21:15:23 | 2025-06-12T21:15:21 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7608.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7608",
"merged_at": "2025-06-12T21:15:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7608.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7608"
} | todo:
- [x] fix TestPushToHub.test_push_dataset_dict_to_hub_iterable_num_proc | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7608/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7608/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7607 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7607/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7607/comments | https://api.github.com/repos/huggingface/datasets/issues/7607/events | https://github.com/huggingface/datasets/issues/7607 | 3,135,722,560 | I_kwDODunzps6651RA | 7,607 | Video and audio decoding with torchcodec | {
"avatar_url": "https://avatars.githubusercontent.com/u/49127578?v=4",
"events_url": "https://api.github.com/users/TyTodd/events{/privacy}",
"followers_url": "https://api.github.com/users/TyTodd/followers",
"following_url": "https://api.github.com/users/TyTodd/following{/other_user}",
"gists_url": "https://api.github.com/users/TyTodd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TyTodd",
"id": 49127578,
"login": "TyTodd",
"node_id": "MDQ6VXNlcjQ5MTI3NTc4",
"organizations_url": "https://api.github.com/users/TyTodd/orgs",
"received_events_url": "https://api.github.com/users/TyTodd/received_events",
"repos_url": "https://api.github.com/users/TyTodd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TyTodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TyTodd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TyTodd",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Good idea ! let me know if you have any question or if I can help",
"@lhoestq Almost finished, but I'm having trouble understanding this test case.\nThis is how it looks originally. The `map` function is called, and then `with_format` is called. According to the test case example[\"video\"] is supposed to be a VideoReader. However, according to the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.with_format) its supposed to be the type passed into `with_format` (numpy in this case). My implementation with VideoDecoder currently does the latter, is that correct, or should it be a VideoDecoder object instead?\n```\n@require_torchvision\ndef test_dataset_with_video_map_and_formatted(shared_datadir):\n from torchvision.io import VideoReader\n\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path]}\n features = Features({\"video\": Video()})\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n # from bytes\n with open(video_path, \"rb\") as f:\n data = {\"video\": [f.read()]}\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n```",
"Hi ! It's maybe more convenient for users to always have a VideoDecoder, since they might only access a few frames and not the full video. So IMO it's fine to always return a VideoDecoder (maybe later we can extend the VideoDecoder to return other types of tensors than numpy arrays though ? 👀 it's not crucial for now though)",
"@lhoestq ya that makes sense, looks like this functionality lives in `src/datasets/formatting`, where an exception is made for VideoReader objects to remain as themselves when being formatted. I'll make the necessary changes. ",
"@lhoestq I'm assuming this was also the case for torchaudio objects?",
"We're not using torchaudio but soundfile. But anyway we unfortunately decode full audio files instead of returning a Reader and it can be interesting to fix this. Currently it always returns a dict {\"array\": np.array(...), \"sampling_rate\": int(...)}, while it would be cool to return a reader with seek() and read() - like methods as for videos.\n\n(there is a way to make the audio change backward compatible anyway by allowing `reader[\"array\"]` to return the full array)",
"@lhoestq (sorry for the spam btw)\nLooks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. Maybe a performance thing?\nThis is from `/src/datasets/formatting/np_formatter.py` line 70\n```\nif config.TORCHVISION_AVAILABLE and \"torchvision\" in sys.modules:\n from torchvision.io import VideoReader\n\n if isinstance(value, VideoReader):\n return value # TODO(QL): set output to np arrays ?\n```",
"Oh cool ya this is something that I could implement with torchcodec. I can add that to the PR as well.",
"> Looks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. Maybe a performance thing?\n\nyea that was me, I focused on a simple logic to start with, since I knew there was torchcodec coming and maybe wasn't worth it at the time ^^\n\nbut anyway it's fine to start with a logic without formatting to start with and then iterate",
"Hey @lhoestq I ran into an error with this test case for the Audio feature\n\n```\n@require_sndfile\n@require_torchcodec\ndef test_dataset_with_audio_feature_map_is_decoded(shared_datadir):\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data = {\"audio\": [audio_path], \"text\": [\"Hello\"]}\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n sample_rate = example[\"audio\"].get_all_samples().sample_rate\n example[\"double_sampling_rate\"] = 2 * sample_rate\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n\n def process_audio_sampling_rate_by_batch(batch):\n double_sampling_rates = []\n for audio in batch[\"audio\"]:\n double_sampling_rates.append(2 * audio.get_all_samples().sample_rate)\n batch[\"double_sampling_rate\"] = double_sampling_rates\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n```\n\nthis is the error below\n```\nsrc/datasets/arrow_writer.py:626: in write_batch\n arrays.append(pa.array(typed_sequence))\n.....\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded - pyarrow.lib.ArrowInvalid: Could not convert <torchcodec.decoders._audio_decoder.AudioDecoder object at 0x138cdd810> with type AudioDecoder: did not recognize Python value type when inferring an Arrow data type\n```\n\nBy the way I copied the test case and ran it on the original implementation of the Video feature, which uses the torchvision backend and I got a similar error.\n```\ndef test_dataset_with_video_feature_map_is_decoded(shared_datadir):\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path], \"text\": [\"Hello\"]}\n features = Features({\"video\": Video(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n metadata = example[\"video\"].get_metadata()\n example[\"double_fps\"] = 2 * metadata[\"video\"][\"fps\"][0]\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past 2*10 is made up!! shouldn't pass\n\n def process_audio_sampling_rate_by_batch(batch):\n double_fps = []\n for video in batch[\"video\"]:\n double_fps.append(2 * video.metadata.begin_stream_seconds)\n batch[\"double_fps\"] = double_fps\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past this no reason it should\n```\n\nI was wondering if these error's are expected. They seem to be coming from the fact that the function `_cast_to_python_objects` in `src/datasets/features/features.py` doesn't handle VideoDecoders or AudioDecoders. I was able to fix it and get rid of the error by adding this to the bottom of the function\n```\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, VideoDecoder):\n v = Video()\n return v.encode_example(obj), True\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, AudioDecoder):\n a = Audio()\n return a.encode_example(obj), True\n```\nThis fixed it, but I just want to make sure I'm not adding things that are messing up the intended functionality.",
"This is the right fix ! :)",
"Btw I just remembered that we were using soundfile because it can support a wide range of audio formats, is it also the case for torchcodec ? including ogg, opus for example",
"Yes from what I understand torchcodec supports everything ffmpeg supports.",
"Okay just finished. However, I wasn't able to pass this test case:\n```python\n@require_torchcodec\n@require_sndfile\[email protected](\"streaming\", [False, True])\ndef test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n item = dset[0] if not streaming else next(iter(dset))\n assert item.keys() == {\"audio\", \"text\"}\n assert isinstance(item[\"audio\"], AudioDecoder)\n samples = item[\"audio\"].get_all_samples()\n assert samples.sample_rate == 44100\n assert samples.data.shape == (1, 202311)\n```\n\nIt returned this error\n```\nstreaming = False, jsonl_audio_dataset_path = '/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/data2/audio_dataset.jsonl'\nshared_datadir = PosixPath('/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/test_load_dataset_with_audio_f0/data')\n\n @require_torchcodec\n @require_sndfile\n @pytest.mark.parametrize(\"streaming\", [False, True])\n def test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n> dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n\ntests/features/test_audio.py:686: \n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\nsrc/datasets/load.py:1418: in load_dataset\n builder_instance.download_and_prepare(\nsrc/datasets/builder.py:925: in download_and_prepare\n self._download_and_prepare(\nsrc/datasets/builder.py:1019: in _download_and_prepare\n verify_splits(self.info.splits, split_dict)\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\n\nexpected_splits = {'train': SplitInfo(name='train', num_bytes=2351563, num_examples=10000, shard_lengths=None, dataset_name=None), 'validation': SplitInfo(name='validation', num_bytes=238418, num_examples=1000, shard_lengths=None, dataset_name=None)}\nrecorded_splits = {'train': SplitInfo(name='train', num_bytes=167, num_examples=1, shard_lengths=None, dataset_name='json')}\n\n def verify_splits(expected_splits: Optional[dict], recorded_splits: dict):\n if expected_splits is None:\n logger.info(\"Unable to verify splits sizes.\")\n return\n if len(set(expected_splits) - set(recorded_splits)) > 0:\n> raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits)))\nE datasets.exceptions.ExpectedMoreSplitsError: {'validation'}\n\nsrc/datasets/utils/info_utils.py:68: ExpectedMoreSplitsError\n```\n\nIt looks like this test case wasn't passing when I forked the repo, so I assume I didn't do anything to break it. I also added this case to `test_video.py`, and it fails there as well. If this looks good, I'll go ahead and submit the PR."
] | 2025-06-11T07:02:30 | 2025-06-13T08:43:05 | null | NONE | null | null | null | null | ### Feature request
Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video.
### Motivation
My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extract the audio tensors directly from MP4 files. Also, I can easily resample video data to whatever fps I like on the fly. I haven't found an easy/efficient way to do this with torchvision.
### Your contribution
I’m modifying the Video dataclass to use torchcodec in place of the current backend, starting from a stable commit for a project I’m working on. If it ends up working well, I’m happy to open a PR on main. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7607/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7607/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7606 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7606/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7606/comments | https://api.github.com/repos/huggingface/datasets/issues/7606/events | https://github.com/huggingface/datasets/pull/7606 | 3,133,848,546 | PR_kwDODunzps6Z3_kV | 7,606 | Add `num_proc=` to `.push_to_hub()` (Dataset and IterableDataset) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-10T14:35:10 | 2025-06-11T16:47:28 | 2025-06-11T16:47:25 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7606.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7606",
"merged_at": "2025-06-11T16:47:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7606.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7606"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 6,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7606/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7606/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7605/comments | https://api.github.com/repos/huggingface/datasets/issues/7605/events | https://github.com/huggingface/datasets/pull/7605 | 3,131,636,882 | PR_kwDODunzps6ZwcPp | 7,605 | Make `push_to_hub` atomic (#7600) | {
"avatar_url": "https://avatars.githubusercontent.com/u/391004?v=4",
"events_url": "https://api.github.com/users/sharvil/events{/privacy}",
"followers_url": "https://api.github.com/users/sharvil/followers",
"following_url": "https://api.github.com/users/sharvil/following{/other_user}",
"gists_url": "https://api.github.com/users/sharvil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sharvil",
"id": 391004,
"login": "sharvil",
"node_id": "MDQ6VXNlcjM5MTAwNA==",
"organizations_url": "https://api.github.com/users/sharvil/orgs",
"received_events_url": "https://api.github.com/users/sharvil/received_events",
"repos_url": "https://api.github.com/users/sharvil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sharvil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sharvil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sharvil",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7605). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi ! unfortunately we can't allow atomic commits for commits with hundreds of files additions (HF would time out)\r\n\r\nMaybe an alternative would be to retry if there was a commit in between ? this could be the default behavior as well",
"Thanks for taking a look – much appreciated!\r\n\r\nI've verified that commits with up to 20,000 files don't time out and the commit time scales linearly with the number of operations enqueued. It took just under 2 minutes to complete (successfully) the 20k file commit.\r\n\r\nThe fundamental issue I'm trying to tackle here is dataset corruption: getting into a state where a dataset on the hub cannot be used when downloaded. Non-atomic commits won't get us there, I think. If, for example, 3 of 5 commits complete and the machine/process calling `push_to_hub` has a network, hardware, or other failure that prevents it from completing the rest of the commits (even with retries) we'll now have some pointer files pointing to the new data and others pointing to the old data => corrupted. While this may seem like an unlikely scenario, it's a regular occurrence at scale.\r\n\r\nIf you still feel strongly that atomic commits are not the right way to go, I can either set it to not be the default or remove it entirely from this PR.\r\n\r\nAs for retries, it's a good idea. In a non-atomic world, the logic gets more complicated:\r\n- keep an explicit queue of pending add/delete operations\r\n- chunkwise pop from queue and commit with `parent_commit` set to previous chunked commit hash\r\n- if `create_commit` fails:\r\n - re-fetch README and set `parent_commit` to latest hash for `revision`\r\n - re-generate dataset card content\r\n - swap old `CommitOperationAdd` with new one for README in the pending queue\r\n- resume chunkwise committing from the queue as above\r\n\r\nEntirely doable, but more involved than I signed up for with this PR.",
"Just to clarify – setting the `parent_commit` can be separated from making the commit atomic (which is what I'm suggesting by either atomic commits not the default or removing it from this PR). It's crucial to set the parent commit to avoid the read-modify-write race condition on the README schema."
] | 2025-06-09T22:29:38 | 2025-06-11T15:56:58 | null | NONE | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7605.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7605",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7605.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7605"
} | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7605/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7604/comments | https://api.github.com/repos/huggingface/datasets/issues/7604/events | https://github.com/huggingface/datasets/pull/7604 | 3,130,837,169 | PR_kwDODunzps6Ztrm_ | 7,604 | Docs and more methods for IterableDataset: push_to_hub, to_parquet... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7604). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-09T16:44:40 | 2025-06-10T13:15:23 | 2025-06-10T13:15:21 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7604.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7604",
"merged_at": "2025-06-10T13:15:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7604.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7604"
} | to_csv, to_json, to_sql, to_pandas, to_polars, to_dict, to_list | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7604/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7603 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7603/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7603/comments | https://api.github.com/repos/huggingface/datasets/issues/7603/events | https://github.com/huggingface/datasets/pull/7603 | 3,130,394,563 | PR_kwDODunzps6ZsKin | 7,603 | No TF in win tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7603). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-09T13:56:34 | 2025-06-09T15:33:31 | 2025-06-09T15:33:30 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7603.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7603",
"merged_at": "2025-06-09T15:33:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7603.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7603"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7603/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7603/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7602/comments | https://api.github.com/repos/huggingface/datasets/issues/7602/events | https://github.com/huggingface/datasets/pull/7602 | 3,128,758,924 | PR_kwDODunzps6Zmk99 | 7,602 | Enhance error handling and input validation across multiple modules | {
"avatar_url": "https://avatars.githubusercontent.com/u/147746955?v=4",
"events_url": "https://api.github.com/users/mohiuddin-khan-shiam/events{/privacy}",
"followers_url": "https://api.github.com/users/mohiuddin-khan-shiam/followers",
"following_url": "https://api.github.com/users/mohiuddin-khan-shiam/following{/other_user}",
"gists_url": "https://api.github.com/users/mohiuddin-khan-shiam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mohiuddin-khan-shiam",
"id": 147746955,
"login": "mohiuddin-khan-shiam",
"node_id": "U_kgDOCM5wiw",
"organizations_url": "https://api.github.com/users/mohiuddin-khan-shiam/orgs",
"received_events_url": "https://api.github.com/users/mohiuddin-khan-shiam/received_events",
"repos_url": "https://api.github.com/users/mohiuddin-khan-shiam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mohiuddin-khan-shiam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohiuddin-khan-shiam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mohiuddin-khan-shiam",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 2025-06-08T23:01:06 | 2025-06-08T23:01:06 | null | NONE | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7602.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7602",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7602.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7602"
} | This PR improves the robustness and user experience by:
1. **Audio Module**:
- Added clear error messages when required fields ('path' or 'bytes') are missing in audio encoding
2. **DatasetDict**:
- Enhanced key access error messages to show available splits when an invalid key is accessed
3. **NonMutableDict**:
- Added input validation for the update() method to ensure proper mapping types
4. **Arrow Reader**:
- Improved error messages for small dataset percentage splits with suggestions for alternatives
5. **FaissIndex**:
- Strengthened input validation with descriptive error messages
- Added proper type checking and shape validation for search queries
These changes make the code more maintainable and user-friendly by providing actionable feedback when issues arise. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7602/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7602/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7600/comments | https://api.github.com/repos/huggingface/datasets/issues/7600/events | https://github.com/huggingface/datasets/issues/7600 | 3,127,296,182 | I_kwDODunzps66ZsC2 | 7,600 | `push_to_hub` is not concurrency safe (dataset schema corruption) | {
"avatar_url": "https://avatars.githubusercontent.com/u/391004?v=4",
"events_url": "https://api.github.com/users/sharvil/events{/privacy}",
"followers_url": "https://api.github.com/users/sharvil/followers",
"following_url": "https://api.github.com/users/sharvil/following{/other_user}",
"gists_url": "https://api.github.com/users/sharvil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sharvil",
"id": 391004,
"login": "sharvil",
"node_id": "MDQ6VXNlcjM5MTAwNA==",
"organizations_url": "https://api.github.com/users/sharvil/orgs",
"received_events_url": "https://api.github.com/users/sharvil/received_events",
"repos_url": "https://api.github.com/users/sharvil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sharvil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sharvil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sharvil",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@lhoestq can you please take a look? I've submitted a PR that fixes this issue. Thanks.",
"Thanks for the ping ! As I said in https://github.com/huggingface/datasets/pull/7605 there is maybe a more general approach using retries :)"
] | 2025-06-07T17:28:56 | 2025-06-11T14:14:04 | null | NONE | null | null | null | null | ### Describe the bug
Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.
Consider this scenario:
- we have an Arrow dataset
- there are `N` configs of the dataset
- there are `N` independent processes operating on each of the individual configs (e.g. adding a column, `new_col`)
- each process calls `push_to_hub` on their particular config when they're done processing
- all calls to `push_to_hub` succeed
- the `README.md` now has some configs with `new_col` added and some with `new_col` missing
Any attempt to load a config (using `load_dataset`) where `new_col` is missing will fail because of a schema mismatch between `README.md` and the Arrow files. Fixing the dataset requires updating `README.md` by hand with the correct schema for the affected config. In effect, `push_to_hub` is doing a `git push --force` (I found this behavior quite surprising).
We have hit this issue every time we run processing jobs over our datasets and have to fix corrupted schemas by hand.
Reading through the code, it seems that specifying a [`parent_commit`](https://github.com/huggingface/huggingface_hub/blob/v0.32.4/src/huggingface_hub/hf_api.py#L4587) hash around here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5794 would get us to a normal, non-forced git push, and avoid schema corruption. I'm not familiar enough with the code to know how to determine the commit hash from which the in-memory dataset card was loaded.
### Steps to reproduce the bug
See above.
### Expected behavior
Concurrent edits to disjoint configs of a dataset should never corrupt the dataset schema.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.2
- `fsspec` version: 2023.9.0 | null | {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7600/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7599/comments | https://api.github.com/repos/huggingface/datasets/issues/7599/events | https://github.com/huggingface/datasets/issues/7599 | 3,125,620,119 | I_kwDODunzps66TS2X | 7,599 | My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl | {
"avatar_url": "https://avatars.githubusercontent.com/u/97530443?v=4",
"events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/events{/privacy}",
"followers_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/followers",
"following_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JuanCarlosMartinezSevilla",
"id": 97530443,
"login": "JuanCarlosMartinezSevilla",
"node_id": "U_kgDOBdAySw",
"organizations_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/orgs",
"received_events_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/received_events",
"repos_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanCarlosMartinezSevilla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JuanCarlosMartinezSevilla",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Maybe its been a recent update, but i can manage to load the metadata.jsonl separately from the images with:\n\n```\nmetadata = load_dataset(\"PRAIG/SMB\", split=\"train\", data_files=[\"*.jsonl\"])\nimages = load_dataset(\"PRAIG/SMB\", split=\"train\")\n```\nDo you know it this is an expected behaviour? This makes my dataset viewer to only load the images without the labeling of metadata.jsonl.\n\nThanks",
"Hi ! this is because we now expect the metadata file to be inside the directory named after the split \"train\" (this way each split can have its own metadata and can be loaded independently)\n\nYou can fix that by configuring it explicitly in the dataset's README.md header:\n\n```yaml\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path:\n - \"train/**/*.png\"\n - \"metadata.jsonl\"\n```\n\n(or by moving the metadata.jsonl in train/ but in this case you also have to modify the content of the JSONL to fix the relative paths to the images)"
] | 2025-06-06T18:59:00 | 2025-06-11T14:36:55 | null | NONE | null | null | null | null | ### Describe the bug
Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Without modifying anything in the dataset repository now the Dataset viewer is not rendering the metadata.jsonl annotations, neither it is being downloaded when using load_dataset. Can you please help? Thank you in advance.
### Steps to reproduce the bug
from datasets import load_dataset
ds = load_dataset("PRAIG/SMB")
ds = ds["train"]
### Expected behavior
It is expected to have all the metadata available in the jsonl file. Fields like: "score_id", "original_width", "original_height", "regions"... among others.
### Environment info
datasets==3.6.0, python 3.13.3 (but he problem is already in the huggingface dataset page) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7599/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7599/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7598/comments | https://api.github.com/repos/huggingface/datasets/issues/7598/events | https://github.com/huggingface/datasets/pull/7598 | 3,125,184,457 | PR_kwDODunzps6ZaclZ | 7,598 | fix string_to_dict usage for windows | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7598). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-06T15:54:29 | 2025-06-06T16:12:22 | 2025-06-06T16:12:21 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7598.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7598",
"merged_at": "2025-06-06T16:12:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7598.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7598"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7598/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7597/comments | https://api.github.com/repos/huggingface/datasets/issues/7597/events | https://github.com/huggingface/datasets/issues/7597 | 3,123,962,709 | I_kwDODunzps66M-NV | 7,597 | Download datasets from a private hub in 2025 | {
"avatar_url": "https://avatars.githubusercontent.com/u/178552926?v=4",
"events_url": "https://api.github.com/users/DanielSchuhmacher/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielSchuhmacher/followers",
"following_url": "https://api.github.com/users/DanielSchuhmacher/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielSchuhmacher/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DanielSchuhmacher",
"id": 178552926,
"login": "DanielSchuhmacher",
"node_id": "U_kgDOCqSAXg",
"organizations_url": "https://api.github.com/users/DanielSchuhmacher/orgs",
"received_events_url": "https://api.github.com/users/DanielSchuhmacher/received_events",
"repos_url": "https://api.github.com/users/DanielSchuhmacher/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DanielSchuhmacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielSchuhmacher/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DanielSchuhmacher",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! First, and in the general case, Hugging Face does offer to host private datasets, and with a subscription you can even choose the region in which the repositories are hosted (US, EU)\n\nThen if you happen to have a private deployment, you can set the HF_ENDPOINT environment variable (same as in https://github.com/huggingface/transformers/issues/38634)",
"Thank you @lhoestq. Works as described!"
] | 2025-06-06T07:55:19 | 2025-06-11T14:43:36 | null | NONE | null | null | null | null | ### Feature request
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature.
The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted.
This issue was raised before here: https://github.com/huggingface/datasets/issues/3679
@juliensimon
### Motivation
none
### Your contribution
none | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7597/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7596/comments | https://api.github.com/repos/huggingface/datasets/issues/7596/events | https://github.com/huggingface/datasets/pull/7596 | 3,122,595,042 | PR_kwDODunzps6ZRkEU | 7,596 | Add albumentations to use dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5481618?v=4",
"events_url": "https://api.github.com/users/ternaus/events{/privacy}",
"followers_url": "https://api.github.com/users/ternaus/followers",
"following_url": "https://api.github.com/users/ternaus/following{/other_user}",
"gists_url": "https://api.github.com/users/ternaus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ternaus",
"id": 5481618,
"login": "ternaus",
"node_id": "MDQ6VXNlcjU0ODE2MTg=",
"organizations_url": "https://api.github.com/users/ternaus/orgs",
"received_events_url": "https://api.github.com/users/ternaus/received_events",
"repos_url": "https://api.github.com/users/ternaus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ternaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ternaus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ternaus",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@lhoestq ping",
"@lhoestq ping"
] | 2025-06-05T20:39:46 | 2025-06-11T14:20:25 | null | CONTRIBUTOR | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7596.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7596",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7596.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7596"
} | 1. Fixed broken link to the list of transforms in torchvison.
2. Extended section about video image augmentations with an example from Albumentations. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7596/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7596/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7595/comments | https://api.github.com/repos/huggingface/datasets/issues/7595/events | https://github.com/huggingface/datasets/pull/7595 | 3,121,689,436 | PR_kwDODunzps6ZOaFl | 7,595 | Add `IterableDataset.push_to_hub()` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7595). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-05T15:29:32 | 2025-06-06T16:12:37 | 2025-06-06T16:12:36 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7595.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7595",
"merged_at": "2025-06-06T16:12:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7595.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7595"
} | Basic implementation, which writes one shard per input dataset shard.
This is to be improved later.
Close https://github.com/huggingface/datasets/issues/5665
PS: for image/audio datasets structured as actual image/audio files (not parquet), you can sometimes speed it up with `ds.decode(num_threads=...).push_to_hub(...)` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7595/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7594/comments | https://api.github.com/repos/huggingface/datasets/issues/7594/events | https://github.com/huggingface/datasets/issues/7594 | 3,120,799,626 | I_kwDODunzps66A5-K | 7,594 | Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format) | {
"avatar_url": "https://avatars.githubusercontent.com/u/36810152?v=4",
"events_url": "https://api.github.com/users/avishaiElmakies/events{/privacy}",
"followers_url": "https://api.github.com/users/avishaiElmakies/followers",
"following_url": "https://api.github.com/users/avishaiElmakies/following{/other_user}",
"gists_url": "https://api.github.com/users/avishaiElmakies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avishaiElmakies",
"id": 36810152,
"login": "avishaiElmakies",
"node_id": "MDQ6VXNlcjM2ODEwMTUy",
"organizations_url": "https://api.github.com/users/avishaiElmakies/orgs",
"received_events_url": "https://api.github.com/users/avishaiElmakies/received_events",
"repos_url": "https://api.github.com/users/avishaiElmakies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avishaiElmakies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avishaiElmakies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avishaiElmakies",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Good point, I'd be in favor of having the `columns` argument in `JsonConfig` (and the others) to align with `ParquetConfig` to let users choose which columns to load and ignore the rest",
"Is it possible to ignore columns when using parquet? ",
"Yes, you can pass `columns=...` to load_dataset to select which columns to load, and it is passed to `ParquetConfig` :)",
"Ok, i didn't know that. \nAnyway, it would be good to add this to others"
] | 2025-06-05T11:12:45 | 2025-06-05T12:58:12 | null | NONE | null | null | null | null | ### Feature request
Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl).
### Motivation
I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (It is not my data and it is too big for me to clean and save on my own hardware).
I would like the option to just ignore this column when using `load_dataset`, since i don't need it.
I tried to look if this is already possible but couldn't find a solution. if there is I would love some help. If it is not currently possible, I would love this feature
### Your contribution
I don't think I can help this time, unfortunately. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7594/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7593 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7593/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7593/comments | https://api.github.com/repos/huggingface/datasets/issues/7593/events | https://github.com/huggingface/datasets/pull/7593 | 3,118,812,368 | PR_kwDODunzps6ZE34G | 7,593 | Fix broken link to albumentations | {
"avatar_url": "https://avatars.githubusercontent.com/u/5481618?v=4",
"events_url": "https://api.github.com/users/ternaus/events{/privacy}",
"followers_url": "https://api.github.com/users/ternaus/followers",
"following_url": "https://api.github.com/users/ternaus/following{/other_user}",
"gists_url": "https://api.github.com/users/ternaus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ternaus",
"id": 5481618,
"login": "ternaus",
"node_id": "MDQ6VXNlcjU0ODE2MTg=",
"organizations_url": "https://api.github.com/users/ternaus/orgs",
"received_events_url": "https://api.github.com/users/ternaus/received_events",
"repos_url": "https://api.github.com/users/ternaus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ternaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ternaus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ternaus",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7593). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq ping"
] | 2025-06-04T19:00:13 | 2025-06-05T16:37:02 | 2025-06-05T16:36:32 | CONTRIBUTOR | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7593.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7593",
"merged_at": "2025-06-05T16:36:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7593.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7593"
} | A few months back I rewrote all docs at [https://albumentations.ai/docs](https://albumentations.ai/docs), and some pages changed their links.
In this PR fixed link to the most recent doc in Albumentations about bounding boxes and it's format.
Fix a few typos in the doc as well. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7593/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7593/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7592 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7592/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7592/comments | https://api.github.com/repos/huggingface/datasets/issues/7592/events | https://github.com/huggingface/datasets/pull/7592 | 3,118,203,880 | PR_kwDODunzps6ZC2so | 7,592 | Remove scripts altogether | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7592). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-06-04T15:14:11 | 2025-06-09T16:45:29 | 2025-06-09T16:45:27 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7592.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7592",
"merged_at": "2025-06-09T16:45:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7592.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7592"
} | TODO:
- [x] remplace fixtures based on script with no-script fixtures
- [x] windaube | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7592/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7592/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7591/comments | https://api.github.com/repos/huggingface/datasets/issues/7591/events | https://github.com/huggingface/datasets/issues/7591 | 3,117,816,388 | I_kwDODunzps651hpE | 7,591 | Add num_proc parameter to push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/46050679?v=4",
"events_url": "https://api.github.com/users/SwayStar123/events{/privacy}",
"followers_url": "https://api.github.com/users/SwayStar123/followers",
"following_url": "https://api.github.com/users/SwayStar123/following{/other_user}",
"gists_url": "https://api.github.com/users/SwayStar123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SwayStar123",
"id": 46050679,
"login": "SwayStar123",
"node_id": "MDQ6VXNlcjQ2MDUwNjc5",
"organizations_url": "https://api.github.com/users/SwayStar123/orgs",
"received_events_url": "https://api.github.com/users/SwayStar123/received_events",
"repos_url": "https://api.github.com/users/SwayStar123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SwayStar123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SwayStar123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SwayStar123",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2025-06-04T13:19:15 | 2025-06-04T13:19:23 | null | NONE | null | null | null | null | ### Feature request
A number of processes parameter to the dataset.push_to_hub method
### Motivation
Shards are currently uploaded serially which makes it slow for many shards, uploading can be done in parallel and much faster
| null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7591/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7591/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7590/comments | https://api.github.com/repos/huggingface/datasets/issues/7590/events | https://github.com/huggingface/datasets/issues/7590 | 3,101,654,892 | I_kwDODunzps64339s | 7,590 | `Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema. | {
"avatar_url": "https://avatars.githubusercontent.com/u/183279820?v=4",
"events_url": "https://api.github.com/users/AHS-uni/events{/privacy}",
"followers_url": "https://api.github.com/users/AHS-uni/followers",
"following_url": "https://api.github.com/users/AHS-uni/following{/other_user}",
"gists_url": "https://api.github.com/users/AHS-uni/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AHS-uni",
"id": 183279820,
"login": "AHS-uni",
"node_id": "U_kgDOCuygzA",
"organizations_url": "https://api.github.com/users/AHS-uni/orgs",
"received_events_url": "https://api.github.com/users/AHS-uni/received_events",
"repos_url": "https://api.github.com/users/AHS-uni/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AHS-uni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AHS-uni/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AHS-uni",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi @lhoestq \n\nCould you help confirm whether this qualifies as a bug?\n\nIt looks like the issue stems from how `Sequence(Features(...))` is interpreted as a plain struct during schema inference, which leads to a mismatch when casting with PyArrow (especially with nested structs inside lists). From the description, this seems like an inconsistency with expected behavior.\n\nIf confirmed, I’d be happy to take a shot at investigating and potentially submitting a fix.\n\nAlso looping in @AHS-uni — could you kindly share a minimal JSONL example that reproduces this?\n\nThanks!",
"Hello @Flink-ddd \n\nI updated the minimal example and included both JSON and JSONL minimal examples in the Colab notebook. \n\nHere is the minimal JSON file for convenience (can't upload JSONL files).\n\n[mini.json](https://github.com/user-attachments/files/20535145/mini.json)\n\nI've also found a number of issues which describe a similar problem:\n\n[7569](https://github.com/huggingface/datasets/issues/7569) (Open)\n[7137](https://github.com/huggingface/datasets/issues/7137) (Open)\n[7501](https://github.com/huggingface/datasets/issues/7501) (Closed)\n[2434](https://github.com/huggingface/datasets/issues/2434) (Closed)\n\nThe closed issues don't really address the problem (IMO). [7501](https://github.com/huggingface/datasets/issues/7501) provides a workaround (using a Python list instead of `Sequence`), but it seem precarious. ",
"Hi ! `Sequence({...})` corresponds to a struct of lists ([docs](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/main_classes#datasets.Features)). This come from Tensorflow Datasets.\n\nIf you want to use a list of structs, you should use `[{...}]`, e.g.\n\n```python\nitem = {\n \"id\": Value(\"string\"),\n \"data\": Value(\"string\"),\n}\n\nfeatures = Features({\n \"list\": [item],\n})\n```"
] | 2025-05-29T22:53:36 | 2025-06-04T13:13:08 | null | NONE | null | null | null | null | ### Description
When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error:
```
ArrowNotImplementedError: Unsupported cast from list<item: struct<id: string, data: string>> to struct using function cast_struct
```
This occurs even when the `features` schema is explicitly provided and the dataset format supports nested structures natively (e.g., JSON, JSONL).
---
### Minimal Reproduction
[Colab Link.](https://colab.research.google.com/drive/1FZPQy6TP3jVd4B3mYKyfQaWNuOAvljUq?usp=sharing)
#### Dataset
```python
data = [
{
"list": [
{"id": "example1", "data": "text"},
]
},
]
```
#### Schema
```python
from datasets import Features, Sequence, Value
item = Features({
"id": Value("string"),
"data": Value("string"),
})
features = Features({
"list": Sequence(item),
})
```
---
### Tested File Formats
The same schema was tested across different formats:
| Format | Method | Result |
| --------- | --------------------------- | ------------------- |
| JSONL | `load_dataset("json", ...)` | Arrow cast error |
| JSON | `load_dataset("json", ...)` | Arrow cast error |
| In-memory | `Dataset.from_list(...)` | Works as expected |
The issue seems not to be in the schema or the data, but in how `load_dataset()` handles the `Sequence(Features(...))` pattern when parsing from files (specifically JSON and JSONL).
---
### Expected Behavior
If `features` is explicitly defined as:
```python
Features({"list": Sequence(Features({...}))})
```
Then the data should load correctly across all backends — including from JSON and JSONL — without any Arrow casting errors. This works correctly when loading from memory via `Dataset.from_list`.
---
### Environment
* `datasets`: 3.6.0
* `pyarrow`: 20.0.0
* Python: 3.12.10
* OS: Ubuntu 24.04.2 LTS
* Notebook: \[Colab test notebook available]
---
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7590/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7589/comments | https://api.github.com/repos/huggingface/datasets/issues/7589/events | https://github.com/huggingface/datasets/pull/7589 | 3,101,119,704 | PR_kwDODunzps6YKiyL | 7,589 | feat: use content defined chunking | {
"avatar_url": "https://avatars.githubusercontent.com/u/961747?v=4",
"events_url": "https://api.github.com/users/kszucs/events{/privacy}",
"followers_url": "https://api.github.com/users/kszucs/followers",
"following_url": "https://api.github.com/users/kszucs/following{/other_user}",
"gists_url": "https://api.github.com/users/kszucs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kszucs",
"id": 961747,
"login": "kszucs",
"node_id": "MDQ6VXNlcjk2MTc0Nw==",
"organizations_url": "https://api.github.com/users/kszucs/orgs",
"received_events_url": "https://api.github.com/users/kszucs/received_events",
"repos_url": "https://api.github.com/users/kszucs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kszucs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kszucs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kszucs",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7589). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-29T18:19:41 | 2025-06-08T15:06:05 | null | COLLABORATOR | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7589.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7589",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7589.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7589"
} | WIP:
- [x] set the parameters in `io.parquet.ParquetDatasetReader`
- [x] set the parameters in `arrow_writer.ParquetWriter`
It requires a new pyarrow pin ">=21.0.0" which is not yet released. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7589/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7588/comments | https://api.github.com/repos/huggingface/datasets/issues/7588/events | https://github.com/huggingface/datasets/issues/7588 | 3,094,012,025 | I_kwDODunzps64auB5 | 7,588 | ValueError: Invalid pattern: '**' can only be an entire path component [Colab] | {
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wkambale",
"id": 43061081,
"login": "wkambale",
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"repos_url": "https://api.github.com/users/wkambale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wkambale",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\n```",
"```bash\ndatasets version: 2.14.4\nhuggingface_hub version: 0.31.4\nfsspec version: 2025.3.2\n```",
"Version 2.14.4 is not the latest version available, in fact it is from August 08, 2023 (you can check here: https://pypi.org/project/datasets/#history)\n\nUse pip install datasets==3.6.0 to install a more recent version (from May 7, 2025)\n\nI also had the same problem with Colab, after updating to the latest version it was solved.\n\nI hope it helps",
"thank you @CleitonOERocha. it sure did help.\n\nupdating `datasets` to v3.6.0 and keeping `fsspec` on v2025.3.2 eliminates the issue.",
"Very helpful, thank you!"
] | 2025-05-27T13:46:05 | 2025-05-30T13:22:52 | 2025-05-30T01:26:30 | NONE | null | null | null | null | ### Describe the bug
I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that i've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate).
now i changed a few hyperparameters to increase number of tokens for the model, increase Transformer layers, and all
however, when i try to load the dataset, this error keeps coming up.. i have tried everything.. i have re-written the code a hundred times, and this keep coming up
### Steps to reproduce the bug
Imports:
```bash
!pip install datasets huggingface_hub fsspec
```
Python code:
```python
from datasets import load_dataset
HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus"
# Load the dataset
try:
if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME":
raise ValueError(
"Please provide a valid Hugging Face dataset name."
)
dataset = load_dataset(HF_DATASET_NAME)
# Omitted code as the error happens on the line above
except ValueError as ve:
print(f"Configuration Error: {ve}")
raise
except Exception as e:
print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}")
raise e
```
now, i have tried going through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing helps
### Expected behavior
loading the dataset successfully and perform splits (train, test, validation)
### Environment info
from the imports, i do not install specific versions of these libraries, so the latest or available version is installed
* `datasets` version: latest
* `Platform`: Google Colab
* `Hardware`: NVIDIA A100 GPU
* `Python` version: latest
* `huggingface_hub` version: latest
* `fsspec` version: latest | {
"avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4",
"events_url": "https://api.github.com/users/wkambale/events{/privacy}",
"followers_url": "https://api.github.com/users/wkambale/followers",
"following_url": "https://api.github.com/users/wkambale/following{/other_user}",
"gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wkambale",
"id": 43061081,
"login": "wkambale",
"node_id": "MDQ6VXNlcjQzMDYxMDgx",
"organizations_url": "https://api.github.com/users/wkambale/orgs",
"received_events_url": "https://api.github.com/users/wkambale/received_events",
"repos_url": "https://api.github.com/users/wkambale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wkambale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wkambale",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7588/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7587/comments | https://api.github.com/repos/huggingface/datasets/issues/7587/events | https://github.com/huggingface/datasets/pull/7587 | 3,091,834,987 | PR_kwDODunzps6XrB8F | 7,587 | load_dataset splits typing | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-26T18:28:40 | 2025-05-26T18:31:10 | 2025-05-26T18:29:57 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7587",
"merged_at": "2025-05-26T18:29:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7587"
} | close https://github.com/huggingface/datasets/issues/7583 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7587/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7586/comments | https://api.github.com/repos/huggingface/datasets/issues/7586/events | https://github.com/huggingface/datasets/issues/7586 | 3,091,320,431 | I_kwDODunzps64Qc5v | 7,586 | help is appreciated | {
"avatar_url": "https://avatars.githubusercontent.com/u/54931785?v=4",
"events_url": "https://api.github.com/users/rajasekarnp1/events{/privacy}",
"followers_url": "https://api.github.com/users/rajasekarnp1/followers",
"following_url": "https://api.github.com/users/rajasekarnp1/following{/other_user}",
"gists_url": "https://api.github.com/users/rajasekarnp1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rajasekarnp1",
"id": 54931785,
"login": "rajasekarnp1",
"node_id": "MDQ6VXNlcjU0OTMxNzg1",
"organizations_url": "https://api.github.com/users/rajasekarnp1/orgs",
"received_events_url": "https://api.github.com/users/rajasekarnp1/received_events",
"repos_url": "https://api.github.com/users/rajasekarnp1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rajasekarnp1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajasekarnp1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rajasekarnp1",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"how is this related to this repository ?"
] | 2025-05-26T14:00:42 | 2025-05-26T18:21:57 | null | NONE | null | null | null | null | ### Feature request
https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main
### Motivation
ai model develpment and audio
### Your contribution
ai model develpment and audio | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7586/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7585/comments | https://api.github.com/repos/huggingface/datasets/issues/7585/events | https://github.com/huggingface/datasets/pull/7585 | 3,091,227,921 | PR_kwDODunzps6Xo-Tw | 7,585 | Avoid multiple default config names | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7585). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-26T13:27:59 | 2025-06-05T12:41:54 | 2025-06-05T12:41:52 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7585.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7585",
"merged_at": "2025-06-05T12:41:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7585.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7585"
} | Fix duplicating default config names.
Currently, when calling `push_to_hub(set_default=True` with 2 different config names, both are set as default.
Moreover, this will generate an error next time we try to push another default config name, raised by `MetadataConfigs.get_default_config_name`:
https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/arrow_dataset.py#L5757
https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/utils/metadata.py#L186-L188 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7585/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7584/comments | https://api.github.com/repos/huggingface/datasets/issues/7584/events | https://github.com/huggingface/datasets/issues/7584 | 3,090,255,023 | I_kwDODunzps64MYyv | 7,584 | Add LMDB format support | {
"avatar_url": "https://avatars.githubusercontent.com/u/30512160?v=4",
"events_url": "https://api.github.com/users/trotsky1997/events{/privacy}",
"followers_url": "https://api.github.com/users/trotsky1997/followers",
"following_url": "https://api.github.com/users/trotsky1997/following{/other_user}",
"gists_url": "https://api.github.com/users/trotsky1997/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trotsky1997",
"id": 30512160,
"login": "trotsky1997",
"node_id": "MDQ6VXNlcjMwNTEyMTYw",
"organizations_url": "https://api.github.com/users/trotsky1997/orgs",
"received_events_url": "https://api.github.com/users/trotsky1997/received_events",
"repos_url": "https://api.github.com/users/trotsky1997/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trotsky1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trotsky1997/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trotsky1997",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! Can you explain what's your use case ? Is it about converting LMDB to Dataset objects (i.e. converting to Arrow) ?"
] | 2025-05-26T07:10:13 | 2025-05-26T18:23:37 | null | NONE | null | null | null | null | ### Feature request
Add LMDB format support for large memory-mapping files
### Motivation
Add LMDB format support for large memory-mapping files
### Your contribution
I'm trying to add it | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7584/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7583/comments | https://api.github.com/repos/huggingface/datasets/issues/7583/events | https://github.com/huggingface/datasets/issues/7583 | 3,088,987,757 | I_kwDODunzps64HjZt | 7,583 | load_dataset type stubs reject List[str] for split parameter, but runtime supports it | {
"avatar_url": "https://avatars.githubusercontent.com/u/25069969?v=4",
"events_url": "https://api.github.com/users/hierr/events{/privacy}",
"followers_url": "https://api.github.com/users/hierr/followers",
"following_url": "https://api.github.com/users/hierr/following{/other_user}",
"gists_url": "https://api.github.com/users/hierr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hierr",
"id": 25069969,
"login": "hierr",
"node_id": "MDQ6VXNlcjI1MDY5OTY5",
"organizations_url": "https://api.github.com/users/hierr/orgs",
"received_events_url": "https://api.github.com/users/hierr/received_events",
"repos_url": "https://api.github.com/users/hierr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hierr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hierr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hierr",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-05-25T02:33:18 | 2025-05-26T18:29:58 | 2025-05-26T18:29:58 | NONE | null | null | null | null | ### Describe the bug
The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime.
### Steps to reproduce the bug
1. Use load_dataset with multiple splits e.g.:
```
from datasets import load_dataset
ds_train, ds_val, ds_test = load_dataset(
"Silly-Machine/TuPyE-Dataset",
"binary",
split=["train[:75%]", "train[75%:]", "test"]
)
```
2. Observe that code executes correctly at runtime and Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"`
### Expected behavior
The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.7
- `huggingface_hub` version: 0.32.0
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7583/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7583/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7582 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7582/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7582/comments | https://api.github.com/repos/huggingface/datasets/issues/7582/events | https://github.com/huggingface/datasets/pull/7582 | 3,083,515,643 | PR_kwDODunzps6XPIt7 | 7,582 | fix: Add embed_storage in Pdf feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AndreaFrancis",
"id": 5564745,
"login": "AndreaFrancis",
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AndreaFrancis",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7582). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-22T14:06:29 | 2025-05-22T14:17:38 | 2025-05-22T14:17:36 | CONTRIBUTOR | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7582.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7582",
"merged_at": "2025-05-22T14:17:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7582.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7582"
} | Add missing `embed_storage` method in Pdf feature (Same as in Audio and Image) | {
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AndreaFrancis",
"id": 5564745,
"login": "AndreaFrancis",
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AndreaFrancis",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7582/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7582/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7581/comments | https://api.github.com/repos/huggingface/datasets/issues/7581/events | https://github.com/huggingface/datasets/pull/7581 | 3,083,080,413 | PR_kwDODunzps6XNpm0 | 7,581 | Add missing property on `RepeatExamplesIterable` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42788329?v=4",
"events_url": "https://api.github.com/users/SilvanCodes/events{/privacy}",
"followers_url": "https://api.github.com/users/SilvanCodes/followers",
"following_url": "https://api.github.com/users/SilvanCodes/following{/other_user}",
"gists_url": "https://api.github.com/users/SilvanCodes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SilvanCodes",
"id": 42788329,
"login": "SilvanCodes",
"node_id": "MDQ6VXNlcjQyNzg4MzI5",
"organizations_url": "https://api.github.com/users/SilvanCodes/orgs",
"received_events_url": "https://api.github.com/users/SilvanCodes/received_events",
"repos_url": "https://api.github.com/users/SilvanCodes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SilvanCodes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SilvanCodes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SilvanCodes",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-05-22T11:41:07 | 2025-06-05T12:41:30 | 2025-06-05T12:41:29 | CONTRIBUTOR | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7581.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7581",
"merged_at": "2025-06-05T12:41:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7581.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7581"
} | Fixes #7561 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7581/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7581/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7580/comments | https://api.github.com/repos/huggingface/datasets/issues/7580/events | https://github.com/huggingface/datasets/issues/7580 | 3,082,993,027 | I_kwDODunzps63wr2D | 7,580 | Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False. | {
"avatar_url": "https://avatars.githubusercontent.com/u/48768216?v=4",
"events_url": "https://api.github.com/users/s3pi/events{/privacy}",
"followers_url": "https://api.github.com/users/s3pi/followers",
"following_url": "https://api.github.com/users/s3pi/following{/other_user}",
"gists_url": "https://api.github.com/users/s3pi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/s3pi",
"id": 48768216,
"login": "s3pi",
"node_id": "MDQ6VXNlcjQ4NzY4MjE2",
"organizations_url": "https://api.github.com/users/s3pi/orgs",
"received_events_url": "https://api.github.com/users/s3pi/received_events",
"repos_url": "https://api.github.com/users/s3pi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/s3pi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s3pi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/s3pi",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! There was a PR open to improve this: https://github.com/huggingface/datasets/pull/6832 \nbut it hasn't been continued so far.\n\nIt would be a cool improvement though !"
] | 2025-05-22T11:08:16 | 2025-05-26T18:40:31 | null | NONE | null | null | null | null | ### Describe the bug
When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call.
This behavior leads to unnecessary bandwidth usage and longer download times, especially for large datasets, even if the user only intends to use a single split.
### Steps to reproduce the bug
dataset_name = "skbose/indian-english-nptel-v0"
dataset = load_dataset(dataset_name, token=hf_token, split="test")
### Expected behavior
Optimize the download logic so that only the required split is downloaded when streaming=False when a specific split is provided.
### Environment info
Dataset: skbose/indian-english-nptel-v0
Platform: M1 Apple Silicon
Python verison: 3.12.9
datasets>=3.5.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7580/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7579/comments | https://api.github.com/repos/huggingface/datasets/issues/7579/events | https://github.com/huggingface/datasets/pull/7579 | 3,081,849,022 | PR_kwDODunzps6XJerX | 7,579 | Fix typos in PDF and Video documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/5564745?v=4",
"events_url": "https://api.github.com/users/AndreaFrancis/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreaFrancis/followers",
"following_url": "https://api.github.com/users/AndreaFrancis/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaFrancis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AndreaFrancis",
"id": 5564745,
"login": "AndreaFrancis",
"node_id": "MDQ6VXNlcjU1NjQ3NDU=",
"organizations_url": "https://api.github.com/users/AndreaFrancis/orgs",
"received_events_url": "https://api.github.com/users/AndreaFrancis/received_events",
"repos_url": "https://api.github.com/users/AndreaFrancis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AndreaFrancis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaFrancis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AndreaFrancis",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7579). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-22T02:27:40 | 2025-05-22T12:53:49 | 2025-05-22T12:53:47 | CONTRIBUTOR | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7579",
"merged_at": "2025-05-22T12:53:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7579"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7579/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7577/comments | https://api.github.com/repos/huggingface/datasets/issues/7577/events | https://github.com/huggingface/datasets/issues/7577 | 3,080,833,740 | I_kwDODunzps63ocrM | 7,577 | arrow_schema is not compatible with list | {
"avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4",
"events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanshen-upwork/followers",
"following_url": "https://api.github.com/users/jonathanshen-upwork/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanshen-upwork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanshen-upwork",
"id": 164412025,
"login": "jonathanshen-upwork",
"node_id": "U_kgDOCcy6eQ",
"organizations_url": "https://api.github.com/users/jonathanshen-upwork/orgs",
"received_events_url": "https://api.github.com/users/jonathanshen-upwork/received_events",
"repos_url": "https://api.github.com/users/jonathanshen-upwork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanshen-upwork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanshen-upwork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanshen-upwork",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, I'll look into it",
"Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = datasets.Features({'x':[datasets.Value(dtype='int32')]})\n```\n\nI'm closing this issue if you don't mind",
"Ah is that what the syntax is? I don't think I was able to find an actual example of it so I assumed it was in the same way that you specify types eg. `list[int]`. This is good to know, thanks."
] | 2025-05-21T16:37:01 | 2025-05-26T18:49:51 | 2025-05-26T18:32:55 | NONE | null | null | null | null | ### Describe the bug
```
import datasets
f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})
f.arrow_schema
Traceback (most recent call last):
File "datasets/features/features.py", line 1826, in arrow_schema
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
^^^^^^^^^
File "datasets/features/features.py", line 1815, in type
return get_nested_type(self)
^^^^^^^^^^^^^^^^^^^^^
File "datasets/features/features.py", line 1252, in get_nested_type
return pa.struct(
^^^^^^^^^^
File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct
File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field
File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type
TypeError: DataType expected, got <class 'list'>
```
The following works
```
f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))})
```
### Expected behavior
according to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765 python list should be a valid type specification for features
### Environment info
- `datasets` version: 3.5.1
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7577/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7576/comments | https://api.github.com/repos/huggingface/datasets/issues/7576/events | https://github.com/huggingface/datasets/pull/7576 | 3,080,450,538 | PR_kwDODunzps6XEuMz | 7,576 | Fix regex library warnings | {
"avatar_url": "https://avatars.githubusercontent.com/u/35470921?v=4",
"events_url": "https://api.github.com/users/emmanuel-ferdman/events{/privacy}",
"followers_url": "https://api.github.com/users/emmanuel-ferdman/followers",
"following_url": "https://api.github.com/users/emmanuel-ferdman/following{/other_user}",
"gists_url": "https://api.github.com/users/emmanuel-ferdman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emmanuel-ferdman",
"id": 35470921,
"login": "emmanuel-ferdman",
"node_id": "MDQ6VXNlcjM1NDcwOTIx",
"organizations_url": "https://api.github.com/users/emmanuel-ferdman/orgs",
"received_events_url": "https://api.github.com/users/emmanuel-ferdman/received_events",
"repos_url": "https://api.github.com/users/emmanuel-ferdman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emmanuel-ferdman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emmanuel-ferdman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emmanuel-ferdman",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7576). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-21T14:31:58 | 2025-06-05T13:35:16 | 2025-06-05T12:37:55 | CONTRIBUTOR | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7576.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7576",
"merged_at": "2025-06-05T12:37:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7576.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7576"
} | # PR Summary
This small PR resolves the regex library warnings showing starting Python3.11:
```python
DeprecationWarning: 'count' is passed as positional argument
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7576/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7576/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7575/comments | https://api.github.com/repos/huggingface/datasets/issues/7575/events | https://github.com/huggingface/datasets/pull/7575 | 3,080,228,718 | PR_kwDODunzps6XD9gM | 7,575 | [MINOR:TYPO] Update save_to_disk docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 2025-05-21T13:22:24 | 2025-06-05T12:39:13 | 2025-06-05T12:39:13 | CONTRIBUTOR | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7575.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7575",
"merged_at": "2025-06-05T12:39:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7575.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7575"
} | r/hub/filesystem in save_to_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7575/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7575/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7574/comments | https://api.github.com/repos/huggingface/datasets/issues/7574/events | https://github.com/huggingface/datasets/issues/7574 | 3,079,641,072 | I_kwDODunzps63j5fw | 7,574 | Missing multilingual directions in IWSLT2017 dataset's processing script | {
"avatar_url": "https://avatars.githubusercontent.com/u/79297451?v=4",
"events_url": "https://api.github.com/users/andy-joy-25/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-joy-25/followers",
"following_url": "https://api.github.com/users/andy-joy-25/following{/other_user}",
"gists_url": "https://api.github.com/users/andy-joy-25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andy-joy-25",
"id": 79297451,
"login": "andy-joy-25",
"node_id": "MDQ6VXNlcjc5Mjk3NDUx",
"organizations_url": "https://api.github.com/users/andy-joy-25/orgs",
"received_events_url": "https://api.github.com/users/andy-joy-25/received_events",
"repos_url": "https://api.github.com/users/andy-joy-25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andy-joy-25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy-joy-25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andy-joy-25",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I have opened 2 PRs on the Hub: `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/7` and `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/8` to resolve this issue",
"cool ! I pinged the owners of the dataset on HF to merge your PRs :)"
] | 2025-05-21T09:53:17 | 2025-05-26T18:36:38 | null | NONE | null | null | null | null | ### Describe the bug
Hi,
Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the list of all the configs present in `IWSLT/iwslt2017`. This should not be the case since as mentioned in their original paper (please see https://aclanthology.org/2017.iwslt-1.1.pdf), the authors specify that "_this year we proposed the multilingual translation between any pair of languages from {Dutch, English, German, Italian, Romanian}..._" and because these datasets are indeed present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`.
Best Regards,
Anand
### Steps to reproduce the bug
Check the output of `get_dataset_config_names("IWSLT/iwslt2017", trust_remote_code=True)`: only 24 language pairs are present and the following 6 config names are absent: `iwslt2017-de-it`, `iwslt2017-de-ro`, `iwslt2017-de-nl`, `iwslt2017-it-de`, `iwslt2017-nl-de`, and `iwslt2017-ro-de`.
### Expected behavior
The aforementioned 6 language pairs should also be present and hence, all these 6 language pairs' IWSLT2017 datasets must also be available for further use.
I would suggest removing `de` from the `BI_LANGUAGES` list and moving it over to the `MULTI_LANGUAGES` list instead in `iwslt2017.py` to account for all the 6 missing language pairs (the same `de-en` dataset is present in both `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip` and `data/2017-01-trnted/texts/de/en/de-en.zip` but the `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` datasets are only present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`: so, its unclear why the following comment: _`# XXX: Artificially removed DE from here, as it also exists within bilingual data`_ has been added as `L71` in `iwslt2017.py`). The `README.md` file in `IWSLT/iwslt2017`must then be re-created using `datasets-cli test path/to/iwslt2017.py --save_info --all_configs` to pass all split size verification checks for the 6 new language pairs which were previously non-existent.
### Environment info
- `datasets` version: 3.5.0
- Platform: Linux-6.8.0-56-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.30.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7574/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7574/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7573/comments | https://api.github.com/repos/huggingface/datasets/issues/7573/events | https://github.com/huggingface/datasets/issues/7573 | 3,076,415,382 | I_kwDODunzps63Xl-W | 7,573 | No Samsum dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17688220?v=4",
"events_url": "https://api.github.com/users/IgorKasianenko/events{/privacy}",
"followers_url": "https://api.github.com/users/IgorKasianenko/followers",
"following_url": "https://api.github.com/users/IgorKasianenko/following{/other_user}",
"gists_url": "https://api.github.com/users/IgorKasianenko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IgorKasianenko",
"id": 17688220,
"login": "IgorKasianenko",
"node_id": "MDQ6VXNlcjE3Njg4MjIw",
"organizations_url": "https://api.github.com/users/IgorKasianenko/orgs",
"received_events_url": "https://api.github.com/users/IgorKasianenko/received_events",
"repos_url": "https://api.github.com/users/IgorKasianenko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IgorKasianenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IgorKasianenko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IgorKasianenko",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"According to the following https://huggingface.co/posts/seawolf2357/424129432408590, as of now the dataset seems to be inaccessible.\n\n@IgorKasianenko, would https://huggingface.co/datasets/knkarthick/samsum suffice for your purpose?\n",
"Thanks @SP1029 for the update!\nThat will work for now, using it as replacement. Is there a officially recommended way to maintain the CC licensed dataset under the organization account? \nFeel free to close this issue"
] | 2025-05-20T09:54:35 | 2025-06-09T08:58:24 | null | NONE | null | null | null | null | ### Describe the bug
https://huggingface.co/datasets/Samsung/samsum dataset not found error 404
Originated from https://github.com/meta-llama/llama-cookbook/issues/948
### Steps to reproduce the bug
go to website https://huggingface.co/datasets/Samsung/samsum
see the error
also downloading it with python throws
```
Couldn't find 'Samsung/samsum' on the Hugging Face Hub either: FileNotFoundError: Samsung/samsum@f00baf5a7d4abfec6820415493bcb52c587788e6/samsum.py (repository not found)
```
### Expected behavior
Dataset exists
### Environment info
```
- `datasets` version: 3.2.0
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.12.2
- `huggingface_hub` version: 0.26.5
- PyArrow version: 16.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7573/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7573/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7572/comments | https://api.github.com/repos/huggingface/datasets/issues/7572/events | https://github.com/huggingface/datasets/pull/7572 | 3,074,529,251 | PR_kwDODunzps6WwsZB | 7,572 | Fixed typos | {
"avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4",
"events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}",
"followers_url": "https://api.github.com/users/TopCoder2K/followers",
"following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}",
"gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TopCoder2K",
"id": 47208659,
"login": "TopCoder2K",
"node_id": "MDQ6VXNlcjQ3MjA4NjU5",
"organizations_url": "https://api.github.com/users/TopCoder2K/orgs",
"received_events_url": "https://api.github.com/users/TopCoder2K/received_events",
"repos_url": "https://api.github.com/users/TopCoder2K/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TopCoder2K",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"@lhoestq, mentioning in case you haven't seen this PR. The contribution is very small and easy to check :)"
] | 2025-05-19T17:16:59 | 2025-06-05T12:25:42 | 2025-06-05T12:25:41 | CONTRIBUTOR | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7572.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7572",
"merged_at": "2025-06-05T12:25:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7572.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7572"
} | More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781). | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7572/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7572/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7571/comments | https://api.github.com/repos/huggingface/datasets/issues/7571/events | https://github.com/huggingface/datasets/pull/7571 | 3,074,116,942 | PR_kwDODunzps6WvRqi | 7,571 | fix string_to_dict test | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7571). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-19T14:49:23 | 2025-05-19T14:52:24 | 2025-05-19T14:49:28 | MEMBER | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7571.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7571",
"merged_at": "2025-05-19T14:49:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7571.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7571"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7571/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7571/timeline | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7570/comments | https://api.github.com/repos/huggingface/datasets/issues/7570/events | https://github.com/huggingface/datasets/issues/7570 | 3,065,966,529 | I_kwDODunzps62vu_B | 7,570 | Dataset lib seems to broke after fssec lib update | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sleepingcat4",
"id": 81933585,
"login": "sleepingcat4",
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sleepingcat4",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi, can you try updating `datasets` ? Colab still installs `datasets` 2.x by default, instead of 3.x\n\nIt would be cool to also report this to google colab, they have a GitHub repo for this IIRC",
"@lhoestq I have updated it to `datasets==3.6.0` and now there's an entirely different issue on colab while locally its fine. \n\n```\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_auth.py:94: UserWarning: \nThe secret `HF_TOKEN` does not exist in your Colab secrets.\nTo authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\nYou will be able to reuse this secret in all of your notebooks.\nPlease note that authentication is recommended but still optional to access public models or datasets.\n warnings.warn(\nREADME.md: 100%\n 2.88k/2.88k [00:00<00:00, 166kB/s]\nsuno.jsonl.zst: 100%\n 221M/221M [00:05<00:00, 48.6MB/s]\nGenerating train split: \n 18633/0 [00:01<00:00, 13018.92 examples/s]\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1870 try:\n-> 1871 writer.write_table(table)\n 1872 except CastError as cast_error:\n\n17 frames\nTypeError: Couldn't cast array of type\nstruct<id: string, type: string, infill: bool, source: string, continue_at: double, infill_dur_s: double, infill_end_s: double, infill_start_s: double, include_future_s: double, include_history_s: double, infill_context_end_s: double, infill_context_start_s: int64>\nto\n{'id': Value(dtype='string', id=None), 'type': Value(dtype='string', id=None), 'infill': Value(dtype='bool', id=None), 'source': Value(dtype='string', id=None), 'continue_at': Value(dtype='float64', id=None), 'include_history_s': Value(dtype='float64', id=None)}\n\nThe above exception was the direct cause of the following exception:\n\nDatasetGenerationError Traceback (most recent call last)\n[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1896 if isinstance(e, DatasetGenerationError):\n 1897 raise\n-> 1898 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\n 1899 \n 1900 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\n\nDatasetGenerationError: An error occurred while generating the dataset\n```",
"@lhoestq opps sorry the dataset was in .zst which was causing this error rather than being a datasets library fault. After upgrading dataset version Colab is working fine. "
] | 2025-05-15T11:45:06 | 2025-06-13T00:44:27 | 2025-06-13T00:44:27 | NONE | null | null | null | null | ### Describe the bug
I am facing an issue since today where HF's dataset is acting weird and in some instances failure to recognise a valid dataset entirely, I think it is happening due to recent change in `fsspec` lib as using this command fixed it for me in one-time: `!pip install -U datasets huggingface_hub fsspec`
### Steps to reproduce the bug
from datasets import load_dataset
def download_hf():
dataset_name = input("Enter the dataset name: ")
subset_name = input("Enter subset name: ")
ds = load_dataset(dataset_name, name=subset_name)
for split in ds:
ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False)
download_hf()
### Expected behavior
```
Downloading readme: 100%
1.55k/1.55k [00:00<00:00, 121kB/s]
Downloading data files: 100%
1/1 [00:00<00:00, 2.06it/s]
Downloading data: 0%| | 0.00/54.2k [00:00<?, ?B/s]
Downloading data: 100%|██████████| 54.2k/54.2k [00:00<00:00, 121kB/s]
Extracting data files: 100%
1/1 [00:00<00:00, 35.17it/s]
Generating test split:
140/0 [00:00<00:00, 2628.62 examples/s]
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[<ipython-input-2-12ab305b0e77>](https://localhost:8080/#) in <cell line: 0>()
8 ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False)
9
---> 10 download_hf()
2 frames
[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1171 is_local = not is_remote_filesystem(self._fs)
1172 if not is_local:
-> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
1174 if not os.path.exists(self._output_dir):
1175 raise FileNotFoundError(
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
```
OR
```
Traceback (most recent call last):
File "e:\Fuck\download-data\mcq_dataset.py", line 10, in <module>
download_hf()
File "e:\Fuck\download-data\mcq_dataset.py", line 6, in download_hf
ds = load_dataset(dataset_name, name=subset_name)
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2606, in load_dataset
builder_instance = load_dataset_builder(
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2277, in load_dataset_builder
dataset_module = dataset_module_factory(
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1917, in dataset_module_factory
raise e1 from None
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1867, in dataset_module_factory
raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e
datasets.exceptions.DatasetNotFoundError: Dataset 'dataset repo_id' doesn't exist on the Hub or cannot be accessed.
```
### Environment info
colab and 3.10 local system | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sleepingcat4",
"id": 81933585,
"login": "sleepingcat4",
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sleepingcat4",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7570/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
https://api.github.com/repos/huggingface/datasets/issues/7569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7569/comments | https://api.github.com/repos/huggingface/datasets/issues/7569/events | https://github.com/huggingface/datasets/issues/7569 | 3,061,234,054 | I_kwDODunzps62drmG | 7,569 | Dataset creation is broken if nesting a dict inside a dict inside a list | {
"avatar_url": "https://avatars.githubusercontent.com/u/25732590?v=4",
"events_url": "https://api.github.com/users/TimSchneider42/events{/privacy}",
"followers_url": "https://api.github.com/users/TimSchneider42/followers",
"following_url": "https://api.github.com/users/TimSchneider42/following{/other_user}",
"gists_url": "https://api.github.com/users/TimSchneider42/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TimSchneider42",
"id": 25732590,
"login": "TimSchneider42",
"node_id": "MDQ6VXNlcjI1NzMyNTkw",
"organizations_url": "https://api.github.com/users/TimSchneider42/orgs",
"received_events_url": "https://api.github.com/users/TimSchneider42/received_events",
"repos_url": "https://api.github.com/users/TimSchneider42/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TimSchneider42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimSchneider42/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TimSchneider42",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! That's because Séquence is a type that comes from tensorflow datasets and inverts lists and focus when doing Séquence(dict).\n\nInstead you should use a list. In your case\n```python\nfeatures = Features({\n \"a\": [{\"b\": {\"c\": Value(\"string\")}}]\n})\n```",
"Hi,\n\nThanks for the swift reply! Could you quickly clarify a couple of points?\n\n1. Is there any benefit in using Sequence over normal lists? Especially for longer lists (in my case, up to 256 entries)\n2. When exactly can I use Sequence? If there is a maximum of one level of dictionaries inside, then it's always fine?\n3. When creating the data in the generator, do I need to swap lists and dicts manually, or does that happen automatically?\n\nAlso, the documentation does not seem to mention this limitation of the Sequence type anywhere and encourages users to use it [here](https://huggingface.co/docs/datasets/en/about_dataset_features). In fact, I did not even know that just using a Python list was an option. Maybe the documentation can be improved to mention the limitations of Sequence and highlight that lists can be used instead.\n\nThanks a lot in advance!\n\nBest,\nTim"
] | 2025-05-13T21:06:45 | 2025-05-20T19:25:15 | null | NONE | null | null | null | null | ### Describe the bug
Hey,
I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details.
Best,
Tim
### Steps to reproduce the bug
Runing this code:
```python
from datasets import Dataset, Features, Sequence, Value
def generator():
yield {
"a": [{"b": {"c": 0}}],
}
features = Features(
{
"a": Sequence(
feature={
"b": {
"c": Value("int32"),
},
},
length=1,
)
}
)
dataset = Dataset.from_generator(generator, features=features)
```
leads to
```
Generating train split: 1 examples [00:00, 540.85 examples/s]
Traceback (most recent call last):
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1635, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 657, in finalize
self.write_examples_on_file()
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 510, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 629, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 4851, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1608, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 399, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 1004, in pyarrow.lib.Array.cast
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/pyarrow/compute.py", line 405, in cast
return call_function("cast", [arr], options, memory_pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/_compute.pyx", line 598, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 393, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from fixed_size_list<item: struct<c: int32>>[1] to struct using function cast_struct
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/test/tools/hf_test2.py", line 23, in <module>
dataset = Dataset.from_generator(generator, features=features)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1114, in from_generator
).read()
^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/io/generator.py", line 49, in read
self.builder.download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 925, in download_and_prepare
self._download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1487, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
Process finished with exit code 1
```
### Expected behavior
I expected this code not to lead to an error.
I have done some digging and figured out that the problem seems to be the `get_nested_type` function in `features.py`, which, for whatever reason, flips Sequences and dicts whenever it encounters a dict inside of a sequence. This seems to be necessary, as disabling that flip leads to another error. However, by keeping that flip enabled for the highest level and disabling it for all subsequent levels, I was able to work around this problem. Specifically, by patching `get_nested_type` as follows, it works on the given example (emphasis on the `level` parameter I added):
```python
def get_nested_type(schema: FeatureType, level=0) -> pa.DataType:
"""
get_nested_type() converts a datasets.FeatureType into a pyarrow.DataType, and acts as the inverse of
generate_from_arrow_type().
It performs double-duty as the implementation of Features.type and handles the conversion of
datasets.Feature->pa.struct
"""
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, Features):
return pa.struct(
{key: get_nested_type(schema[key], level = level + 1) for key in schema}
) # Features is subclass of dict, and dict order is deterministic since Python 3.6
elif isinstance(schema, dict):
return pa.struct(
{key: get_nested_type(schema[key], level = level + 1) for key in schema}
) # however don't sort on struct types since the order matters
elif isinstance(schema, (list, tuple)):
if len(schema) != 1:
raise ValueError("When defining list feature, you should just provide one example of the inner type")
value_type = get_nested_type(schema[0], level = level + 1)
return pa.list_(value_type)
elif isinstance(schema, LargeList):
value_type = get_nested_type(schema.feature, level = level + 1)
return pa.large_list(value_type)
elif isinstance(schema, Sequence):
value_type = get_nested_type(schema.feature, level = level + 1)
# We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict) and level == 1:
data_type = pa.struct({f.name: pa.list_(f.type, schema.length) for f in value_type})
else:
data_type = pa.list_(value_type, schema.length)
return data_type
# Other objects are callable which returns their data type (ClassLabel, Array2D, Translation, Arrow datatype creation methods)
return schema()
```
I have honestly no idea what I am doing here, so this might produce other issues for different inputs.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
Also tested it with 3.5.0, same result. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7569/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} |
HuggingFace Datasets Repository Issues
Dataset Description
This dataset contains issues and pull requests from the huggingface/datasets repository, collected via the GitHub API. Each entry includes comprehensive metadata about the issue/PR along with all associated comments, making it valuable for studying software development patterns, issue resolution processes, and community interactions in open-source projects.
Dataset Summary
- Repository: huggingface/datasets
- Total Issues/PRs: 7,540
- Date Collected: June 13, 2025
- Language: English
- License: Apache 2.0
The dataset includes both open and closed issues/pull requests with their complete comment threads, providing rich context for understanding how software issues are discussed and resolved in a major open-source machine learning library.
Supported Tasks and Leaderboards
This dataset can be used for various NLP and software engineering research tasks:
- Text Classification: Categorizing issues by type (bug, feature request, question, etc.)
- Sentiment Analysis: Analyzing the tone of issue discussions
- Text Generation: Generating responses to software issues
- Question Answering: Extracting answers from issue discussions
- Software Engineering Research: Studying issue resolution patterns, community interactions, and development workflows
Languages
The dataset is primarily in English, as it contains issues from an English-speaking open-source community.
Dataset Structure
Data Instances
Each instance represents a single GitHub issue or pull request with the following structure:
{
"number": 7613,
"title": "fix parallel push_to_hub in dataset_dict",
"body": "Description of the issue...",
"state": "open",
"user": {
"login": "username",
"id": 12345,
...
},
"labels": [
{
"name": "bug",
"color": "d73a4a",
...
}
],
"comments": [
"First comment text...",
"Second comment text...",
...
],
"created_at": "2025-06-13T09:02:24Z",
"updated_at": "2025-06-13T10:38:04Z",
"pull_request": {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7613",
...
},
...
}
Data Fields
- number (int64): Issue/PR number
- title (string): Title of the issue/PR
- body (string): Main description/content
- state (string): Current state (open/closed)
- user (struct): Information about the user who created the issue
- labels (list): Labels assigned to the issue
- comments (sequence): List of all comment texts
- created_at (timestamp): Creation timestamp
- updated_at (timestamp): Last update timestamp
- closed_at (timestamp): Closure timestamp (if closed)
- pull_request (struct): PR-specific metadata (if applicable)
- assignee/assignees (struct/list): Assigned users
- milestone (struct): Associated milestone information
- reactions (struct): Reaction counts (+1, -1, etc.)
- author_association (string): Relationship to repository (OWNER, CONTRIBUTOR, etc.)
Data Splits
The dataset contains a single split:
- train: 7,540 issues/pull requests
Dataset Creation
Curation Rationale
This dataset was created to provide researchers and developers with real-world examples of software issue discussions and resolutions from a popular machine learning library. It can help understand:
- How technical issues are communicated and resolved
- Patterns in community interaction and support
- Evolution of software projects through issue tracking
- Natural language patterns in technical documentation
Source Data
Initial Data Collection and Normalization
The data was collected using the GitHub API v4, specifically targeting the huggingface/datasets
repository. The collection process:
- Issues Retrieval: All issues and pull requests were fetched using paginated API calls
- Comments Collection: For each issue/PR, all associated comments were retrieved
- Data Processing: The raw JSON responses were processed and structured into a consistent format
- Timestamp Handling: All timestamps were normalized to UTC format
Who are the source language producers?
The language producers are contributors to the HuggingFace datasets library, including:
- HuggingFace team members and maintainers
- Open-source contributors from the global developer community
- Users reporting bugs and requesting features
- Community members providing support and discussions
Annotations
Annotation process
No additional annotations were added beyond the existing GitHub metadata (labels, assignees, milestones, etc.) that were already present in the repository.
Who are the annotators?
The repository maintainers and contributors who applied labels and other metadata during normal issue management processes.
Personal and Sensitive Information
The dataset contains publicly available information from GitHub issues. While no intentionally sensitive information should be present, users should be aware that:
- GitHub usernames and profile information are included
- Some issues might contain system information, file paths, or configuration details
- Email addresses might appear in code snippets or error messages
Considerations for Using the Data
Social Impact of Dataset
This dataset can contribute positively to software engineering research and education by providing insights into collaborative development processes. However, users should consider:
- Privacy: Respect the public nature of the data and avoid any analysis that could harm individual contributors
- Context: Issues represent specific technical problems and may not generalize to all software projects
- Bias: The dataset reflects the specific community and practices of the HuggingFace ecosystem
Discussion of Biases
Potential biases in the dataset include:
- Language Bias: Primarily English-language content
- Domain Bias: Focused on machine learning/data science library issues
- Community Bias: Reflects the practices and communication style of the HuggingFace community
- Temporal Bias: Represents issues from a specific time period in the project's evolution
- Technical Bias: May over-represent certain types of technical issues common in ML libraries
Other Known Limitations
- The dataset represents a snapshot from June 13, 2025, and doesn't include subsequent updates
- Comment threads are included as lists but don't preserve detailed threading structure
- Some metadata fields may be incomplete for older issues
- The dataset doesn't include private repository discussions or communications
Additional Information
Dataset Curators
This dataset was curated by Hélder Monteiro by extracting and processing public information from the HuggingFace datasets repository using the GitHub API.
Licensing Information
This dataset is licensed under the Apache 2.0 License, consistent with the open-source nature of the original repository.
Citation Information
@dataset{monteiro_huggingface_datasets_issues_2025,
title={HuggingFace Datasets Repository Issues},
author={Hélder Monteiro},
year={2025},
month={June},
url={https://huggingface.co/datasets/helmo/github-issues},
note={Issues and pull requests from huggingface/datasets repository collected and curated via GitHub API}
}
Disclaimer
This dataset contains content created by the HuggingFace community members who opened issues, submitted pull requests, and participated in discussions on the datasets repository. While this dataset compilation is provided as it is, all original content belongs to the respective contributors.
For questions about this dataset or to report issues, please open an issue in the dataset repository or contact the dataset maintainers.
- Downloads last month
- 111