Dataset Viewer
url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 46
51
| id
int64 599M
2.6B
| node_id
stringlengths 18
32
| number
int64 1
7.24k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| milestone
dict | comments
int64 0
70
| created_at
stringlengths 20
20
| updated_at
stringlengths 20
20
| closed_at
stringlengths 20
20
⌀ | author_association
stringclasses 4
values | active_lock_reason
null | draft
bool 2
classes | pull_request
dict | body
stringlengths 0
228k
⌀ | closed_by
dict | reactions
dict | timeline_url
stringlengths 67
70
| performed_via_github_app
null | state_reason
stringclasses 3
values | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7236/comments | https://api.github.com/repos/huggingface/datasets/issues/7236/events | https://github.com/huggingface/datasets/pull/7236 | 2,597,358,525 | PR_kwDODunzps5_GIvw | 7,236 | [MINOR:TYPO] Update arrow_dataset.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-18T12:10:03Z | 2024-10-18T12:10:03Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7236.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7236",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7236.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7236"
} | Fix wrong link.
csv kwargs docstring link was pointing to pandas json docs. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7236/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7236/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7234/comments | https://api.github.com/repos/huggingface/datasets/issues/7234/events | https://github.com/huggingface/datasets/pull/7234 | 2,594,220,624 | PR_kwDODunzps5-8Z_- | 7,234 | No need for dataset_info | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-10-17T09:54:03Z | 2024-10-18T14:10:12Z | null | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/7234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7234",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7234"
} | save a useless call to /api/datasets/repo_id | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7234/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7233/comments | https://api.github.com/repos/huggingface/datasets/issues/7233/events | https://github.com/huggingface/datasets/issues/7233 | 2,593,903,113 | I_kwDODunzps6am9IJ | 7,233 | 数据集数量问题 | {
"avatar_url": "https://avatars.githubusercontent.com/u/180297268?v=4",
"events_url": "https://api.github.com/users/want-well/events{/privacy}",
"followers_url": "https://api.github.com/users/want-well/followers",
"following_url": "https://api.github.com/users/want-well/following{/other_user}",
"gists_url": "https://api.github.com/users/want-well/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/want-well",
"id": 180297268,
"login": "want-well",
"node_id": "U_kgDOCr8eNA",
"organizations_url": "https://api.github.com/users/want-well/orgs",
"received_events_url": "https://api.github.com/users/want-well/received_events",
"repos_url": "https://api.github.com/users/want-well/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/want-well/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/want-well/subscriptions",
"type": "User",
"url": "https://api.github.com/users/want-well",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-17T07:41:44Z | 2024-10-17T07:41:44Z | null | NONE | null | null | null | ### Describe the bug
这里我进行大模型微调,当数据集数量为718时,模型可以正常微调,但是当我添加一个在前718个数据集中的数据或者新增一个数据就会报错
### Steps to reproduce the bug
1.
这里我的数据集可以微调的最后两个数据集是:
{
"messages": [
{
"role": "user",
"content": "完成校正装置设计后需要进行哪些工作?"
},
{
"role": "assistant",
"content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。"
}
]
}
{
"messages": [
{
"role": "user",
"content": "为什么闭环带宽要求会影响串联超前校正的效果?"
},
{
"role": "assistant",
"content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。"
}
]
}
2.
当我把倒数第二个数据集再次添加到数据集中时,此时我的最后三个数据集是
{
"messages": [
{
"role": "user",
"content": "完成校正装置设计后需要进行哪些工作?"
},
{
"role": "assistant",
"content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。"
}
]
}
{
"messages": [
{
"role": "user",
"content": "为什么闭环带宽要求会影响串联超前校正的效果?"
},
{
"role": "assistant",
"content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。"
}
]
}
{
"messages": [
{
"role": "user",
"content": "完成校正装置设计后需要进行哪些工作?"
},
{
"role": "assistant",
"content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。"
}
]
}
这时系统会显示bug:
root@autodl-container-027f4cad3d-6baf4e64:~/autodl-tmp# python GLM-4/finetune_demo/finetune.py datasets/ ZhipuAI/glm-4-9b-chat GLM-4/finetune_demo/configs/lora.yaml
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.04it/s]
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
trainable params: 2,785,280 || all params: 9,402,736,640 || trainable%: 0.0296
Generating train split: 0 examples [00:00, ? examples/s]Failed to load JSON from file '/root/autodl-tmp/datasets/train.jsonl' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 718
Generating train split: 0 examples [00:00, ? examples/s]
╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:153 in _generate_tables │
│ │
│ 150 │ │ │ │ │ │ │ │ with open( │
│ 151 │ │ │ │ │ │ │ │ │ file, encoding=self.config.encoding, errors=self.con │
│ 152 │ │ │ │ │ │ │ │ ) as f: │
│ ❱ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │
│ 154 │ │ │ │ │ │ │ except ValueError: │
│ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │
│ 156 │ │ │ │ │ │ │ │ raise e │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:815 in read_json │
│ │
│ 812 │ if chunksize: │
│ 813 │ │ return json_reader │
│ 814 │ else: │
│ ❱ 815 │ │ return json_reader.read() │
│ 816 │
│ 817 │
│ 818 class JsonReader(abc.Iterator, Generic[FrameSeriesStrT]): │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1025 in read │
│ │
│ 1022 │ │ │ │ │ │ data_lines = data.split("\n") │
│ 1023 │ │ │ │ │ │ obj = self._get_object_parser(self._combine_lines(data_lines)) │
│ 1024 │ │ │ │ else: │
│ ❱ 1025 │ │ │ │ │ obj = self._get_object_parser(self.data) │
│ 1026 │ │ │ │ if self.dtype_backend is not lib.no_default: │
│ 1027 │ │ │ │ │ return obj.convert_dtypes( │
│ 1028 │ │ │ │ │ │ infer_objects=False, dtype_backend=self.dtype_backend │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1051 in _get_object_parser │
│ │
│ 1048 │ │ } │
│ 1049 │ │ obj = None │
│ 1050 │ │ if typ == "frame": │
│ ❱ 1051 │ │ │ obj = FrameParser(json, **kwargs).parse() │
│ 1052 │ │ │
│ 1053 │ │ if typ == "series" or obj is None: │
│ 1054 │ │ │ if not isinstance(dtype, bool): │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1187 in parse │
│ │
│ 1184 │ │
│ 1185 │ @final │
│ 1186 │ def parse(self): │
│ ❱ 1187 │ │ self._parse() │
│ 1188 │ │ │
│ 1189 │ │ if self.obj is None: │
│ 1190 │ │ │ return None │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1403 in _parse │
│ │
│ 1400 │ │ │
│ 1401 │ │ if orient == "columns": │
│ 1402 │ │ │ self.obj = DataFrame( │
│ ❱ 1403 │ │ │ │ ujson_loads(json, precise_float=self.precise_float), dtype=None │
│ 1404 │ │ │ ) │
│ 1405 │ │ elif orient == "split": │
│ 1406 │ │ │ decoded = { │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Trailing data
During handling of the above exception, another exception occurred:
╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1997 in _prepare_split_single │
│ │
│ 1994 │ │ │ ) │
│ 1995 │ │ │ try: │
│ 1996 │ │ │ │ _time = time.time() │
│ ❱ 1997 │ │ │ │ for _, table in generator: │
│ 1998 │ │ │ │ │ if max_shard_size is not None and writer._num_bytes > max_shard_size │
│ 1999 │ │ │ │ │ │ num_examples, num_bytes = writer.finalize() │
│ 2000 │ │ │ │ │ │ writer.close() │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:156 in _generate_tables │
│ │
│ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │
│ 154 │ │ │ │ │ │ │ except ValueError: │
│ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │
│ ❱ 156 │ │ │ │ │ │ │ │ raise e │
│ 157 │ │ │ │ │ │ │ if df.columns.tolist() == [0]: │
│ 158 │ │ │ │ │ │ │ │ df.columns = list(self.config.features) if self.config.f │
│ 159 │ │ │ │ │ │ │ try: │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:130 in _generate_tables │
│ │
│ 127 │ │ │ │ │ │ try: │
│ 128 │ │ │ │ │ │ │ while True: │
│ 129 │ │ │ │ │ │ │ │ try: │
│ ❱ 130 │ │ │ │ │ │ │ │ │ pa_table = paj.read_json( │
│ 131 │ │ │ │ │ │ │ │ │ │ io.BytesIO(batch), read_options=paj.ReadOptions( │
│ 132 │ │ │ │ │ │ │ │ │ ) │
│ 133 │ │ │ │ │ │ │ │ │ break │
│ │
│ in pyarrow._json.read_json:308 │
│ │
│ in pyarrow.lib.pyarrow_internal_check_status:154 │
│ │
│ in pyarrow.lib.check_status:91 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ArrowInvalid: JSON parse error: Missing a name for object member. in row 718
The above exception was the direct cause of the following exception:
╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:406 in main │
│ │
│ 403 ): │
│ 404 │ ft_config = FinetuningConfig.from_file(config_file) │
│ 405 │ tokenizer, model = load_tokenizer_and_model(model_dir, peft_config=ft_config.peft_co │
│ ❱ 406 │ data_manager = DataManager(data_dir, ft_config.data_config) │
│ 407 │ │
│ 408 │ train_dataset = data_manager.get_dataset( │
│ 409 │ │ Split.TRAIN, │
│ │
│ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:204 in __init__ │
│ │
│ 201 │ def __init__(self, data_dir: str, data_config: DataConfig): │
│ 202 │ │ self._num_proc = data_config.num_proc │
│ 203 │ │ │
│ ❱ 204 │ │ self._dataset_dct = _load_datasets( │
│ 205 │ │ │ data_dir, │
│ 206 │ │ │ data_config.data_format, │
│ 207 │ │ │ data_config.data_files, │
│ │
│ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:189 in _load_datasets │
│ │
│ 186 │ │ num_proc: Optional[int], │
│ 187 ) -> DatasetDict: │
│ 188 │ if data_format == '.jsonl': │
│ ❱ 189 │ │ dataset_dct = load_dataset( │
│ 190 │ │ │ data_dir, │
│ 191 │ │ │ data_files=data_files, │
│ 192 │ │ │ split=None, │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/load.py:2616 in load_dataset │
│ │
│ 2613 │ │ return builder_instance.as_streaming_dataset(split=split) │
│ 2614 │ │
│ 2615 │ # Download and prepare data │
│ ❱ 2616 │ builder_instance.download_and_prepare( │
│ 2617 │ │ download_config=download_config, │
│ 2618 │ │ download_mode=download_mode, │
│ 2619 │ │ verification_mode=verification_mode, │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1029 in download_and_prepare │
│ │
│ 1026 │ │ │ │ │ │ │ prepare_split_kwargs["max_shard_size"] = max_shard_size │
│ 1027 │ │ │ │ │ │ if num_proc is not None: │
│ 1028 │ │ │ │ │ │ │ prepare_split_kwargs["num_proc"] = num_proc │
│ ❱ 1029 │ │ │ │ │ │ self._download_and_prepare( │
│ 1030 │ │ │ │ │ │ │ dl_manager=dl_manager, │
│ 1031 │ │ │ │ │ │ │ verification_mode=verification_mode, │
│ 1032 │ │ │ │ │ │ │ **prepare_split_kwargs, │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1124 in _download_and_prepare │
│ │
│ 1121 │ │ │ │
│ 1122 │ │ │ try: │
│ 1123 │ │ │ │ # Prepare split will record examples associated to the split │
│ ❱ 1124 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │
│ 1125 │ │ │ except OSError as e: │
│ 1126 │ │ │ │ raise OSError( │
│ 1127 │ │ │ │ │ "Cannot find data file. " │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1884 in _prepare_split │
│ │
│ 1881 │ │ │ gen_kwargs = split_generator.gen_kwargs │
│ 1882 │ │ │ job_id = 0 │
│ 1883 │ │ │ with pbar: │
│ ❱ 1884 │ │ │ │ for job_id, done, content in self._prepare_split_single( │
│ 1885 │ │ │ │ │ gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args │
│ 1886 │ │ │ │ ): │
│ 1887 │ │ │ │ │ if done: │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:2040 in _prepare_split_single │
│ │
│ 2037 │ │ │ │ e = e.__context__ │
│ 2038 │ │ │ if isinstance(e, DatasetGenerationError): │
│ 2039 │ │ │ │ raise │
│ ❱ 2040 │ │ │ raise DatasetGenerationError("An error occurred while generating the dataset │
│ 2041 │ │ │
│ 2042 │ │ yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_ │
│ 2043 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
DatasetGenerationError: An error occurred while generating the dataset
3.请问是否可以帮我解决
### Expected behavior
希望问题可以得到解决
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.20.0
- Platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.8
- `huggingface_hub` version: 0.24.6
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.12.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7233/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7232/comments | https://api.github.com/repos/huggingface/datasets/issues/7232/events | https://github.com/huggingface/datasets/pull/7232 | 2,593,720,548 | PR_kwDODunzps5-62rY | 7,232 | (Super tiny doc update) Mention to_polars | {
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fzyzcjy",
"id": 5236035,
"login": "fzyzcjy",
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fzyzcjy",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-17T06:08:53Z | 2024-10-17T11:02:13Z | null | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7232.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7232",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7232.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7232"
} | polars is also quite popular now, thus this tiny update can tell users polars is supported | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7232/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7231/comments | https://api.github.com/repos/huggingface/datasets/issues/7231/events | https://github.com/huggingface/datasets/pull/7231 | 2,592,011,737 | PR_kwDODunzps5-1jP8 | 7,231 | Fix typo in image dataset docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2024-10-16T14:05:46Z | 2024-10-16T17:06:21Z | 2024-10-16T17:06:19Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7231",
"merged_at": "2024-10-16T17:06:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7231"
} | Fix typo in image dataset docs.
Typo reported by @datavistics. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7231/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7230/comments | https://api.github.com/repos/huggingface/datasets/issues/7230/events | https://github.com/huggingface/datasets/pull/7230 | 2,589,531,942 | PR_kwDODunzps5-ttUV | 7,230 | Video support | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-10-15T18:17:29Z | 2024-10-16T16:48:02Z | null | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/7230.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7230",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7230.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7230"
} | (wip and experimental)
adding the `Video` type based on `VideoReader` from `decord`
```python
>>>from datasets import load_dataset
>>> ds = load_dataset("path/to/videos", split="train").with_format("torch")
>>> print(ds[0]["video"])
<decord.video_reader.VideoReader object at 0x337a47910>
>>> print(ds[0]["video"][0])
tensor([[[73, 73, 73],
[73, 73, 73],
[73, 73, 73],
...,
[23, 23, 23],
[23, 23, 23],
[23, 23, 23]]], dtype=torch.uint8)
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7230/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7230/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7229/comments | https://api.github.com/repos/huggingface/datasets/issues/7229/events | https://github.com/huggingface/datasets/pull/7229 | 2,588,847,398 | PR_kwDODunzps5-rgrx | 7,229 | handle config_name=None in push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-15T13:48:57Z | 2024-10-15T13:48:57Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7229.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7229",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7229.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7229"
} | This caught me out - thought it might be better to explicitly handle None? | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7229/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7229/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7228/comments | https://api.github.com/repos/huggingface/datasets/issues/7228/events | https://github.com/huggingface/datasets/issues/7228 | 2,587,310,094 | I_kwDODunzps6aNzgO | 7,228 | Composite (multi-column) features | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-10-14T23:59:19Z | 2024-10-15T11:17:15Z | null | CONTRIBUTOR | null | null | null | ### Feature request
Structured data types (graphs etc.) might often be most efficiently stored as multiple columns, which then need to be combined during feature decoding
Although it is currently possible to nest features as structs, my impression is that in particular when dealing with e.g. a feature composed of multiple numpy array / ArrayXD's, it would be more efficient to store each ArrayXD as a separate column (though I'm not sure by how much)
Perhaps specification / implementation could be supported by something like:
```
features=Features(**{("feature0", "feature1")=Features(feature0=Array2D((None,10), dtype="float32"), feature1=Array2D((None,10), dtype="float32"))
```
### Motivation
Defining efficient composite feature types based on numpy arrays for representing data such as graphs with multiple node and edge attributes is currently challenging.
### Your contribution
Possibly able to contribute | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7228/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7228/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7227/comments | https://api.github.com/repos/huggingface/datasets/issues/7227/events | https://github.com/huggingface/datasets/pull/7227 | 2,587,048,312 | PR_kwDODunzps5-l-OE | 7,227 | fast array extraction | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 3 | 2024-10-14T20:51:32Z | 2024-10-18T11:36:55Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7227.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7227",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7227.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7227"
} | Implements #7210 using method suggested in https://github.com/huggingface/datasets/pull/7207#issuecomment-2411789307
```python
import numpy as np
from datasets import Dataset, Features, Array3D
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)
```
~0.02 s vs 0.9s on main
```python
ds = dataset.to_iterable_dataset()
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
< 0.01 s vs 1.3 s on main
@lhoestq I can see this breaks a bunch of array-related tests but can update the test cases if you would support making this change?
I also added an Array1D feature which will always be decoded into a numpy array and likewise improves extraction performance:
```python
from datasets import Dataset, Features, Array1D, Sequence, Value
array_features=Features(**{"array0": Array1D((None,), dtype="float32"), "array1": Array1D((None,), dtype="float32")})
sequence_features=Features(**{"array0": Sequence(feature=Value("float32"), length=-1), "array1": Sequence(feature=Value("float32"), length=-1)})
array_dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [20000,10000]*25] for i in range(2)}, features=array_features)
sequence_dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [20000,10000]*25] for i in range(2)}, features=sequence_features)
```python
t0 = time.time()
for ex in array_dataset.to_iterable_dataset():
pass
t1 = time.time()
```
< 0.01 s
```python
t0 = time.time()
for ex in sequence_dataset.to_iterable_dataset():
pass
t1 = time.time()
```
~1.1s
And also added support for extracting structs of arrays as dicts of numpy arrays:
```python
import numpy as np
from datasets import Dataset, Features, Array3D, Sequence
features=Features(struct={"array0": Array3D((None,10,10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")}, _list=Sequence(feature=Array3D((None,10,10), dtype="float32")))
dataset = Dataset.from_dict({"struct": [{f"array{i}": np.zeros((x,10,10), dtype=np.float32) for i in range(2)} for x in [2000,1000]*25], "_list": [[np.zeros((x,10,10), dtype=np.float32) for i in range(2)] for x in [2000,1000]*25]}, features=features)
```
```python
t0 = time.time()
for ex in dataset.to_iterable_dataset():
pass
t1 = time.time()
assert isinstance(ex["struct"]["array0"], np.ndarray) and ex["struct"]["array0"].ndim == 3
```
~0.02 s and no exception vs ~7s with an exception on main | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7227/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7226/comments | https://api.github.com/repos/huggingface/datasets/issues/7226/events | https://github.com/huggingface/datasets/issues/7226 | 2,586,920,351 | I_kwDODunzps6aMUWf | 7,226 | Add R as a How to use from the Polars (R) Library as an option | {
"avatar_url": "https://avatars.githubusercontent.com/u/45013044?v=4",
"events_url": "https://api.github.com/users/ran-codes/events{/privacy}",
"followers_url": "https://api.github.com/users/ran-codes/followers",
"following_url": "https://api.github.com/users/ran-codes/following{/other_user}",
"gists_url": "https://api.github.com/users/ran-codes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ran-codes",
"id": 45013044,
"login": "ran-codes",
"node_id": "MDQ6VXNlcjQ1MDEzMDQ0",
"organizations_url": "https://api.github.com/users/ran-codes/orgs",
"received_events_url": "https://api.github.com/users/ran-codes/received_events",
"repos_url": "https://api.github.com/users/ran-codes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ran-codes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ran-codes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ran-codes",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-10-14T19:56:07Z | 2024-10-14T19:57:13Z | null | NONE | null | null | null | ### Feature request
The boiler plate code to access a dataset via the hugging face file system is very useful. Please addd
## Add Polars (R) option
The equivailent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has hugging faces funcitonaliy as well.
```r
library(polars)
df <- pl$read_parquet("hf://datasets/SALURBAL/core__admin_cube_public/core__admin_cube_public.parquet")
```
## Polars (python) option

## Libraries Currently

### Motivation
There are many data/analysis/research/statistics teams (particularly in academia and pharma) that use R as the default language. R has great integration with most of the newer data techs (arrow, parquet, polars) and having this included could really help in bringing this community into the hugging faces ecosystem.
**This is a small/low-hanging-fruit front end change but would make a big impact expanding the community**
### Your contribution
I am not sure which repositroy this should be in, but I have experience in R, Python and JS and happy to submit a PR in the appropriate repository. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7226/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7225/comments | https://api.github.com/repos/huggingface/datasets/issues/7225/events | https://github.com/huggingface/datasets/issues/7225 | 2,586,229,216 | I_kwDODunzps6aJrng | 7,225 | Huggingface GIT returns null as Content-Type instead of application/x-git-receive-pack-result | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-14T14:33:06Z | 2024-10-14T14:33:06Z | null | NONE | null | null | null | ### Describe the bug
We push changes to our datasets programmatically. Our git client jGit reports that the hf git server returns null as Content-Type after a push.
### Steps to reproduce the bug
A basic kotlin application:
```
val person = PersonIdent(
"padmalcom",
"[email protected]"
)
val cp = UsernamePasswordCredentialsProvider(
"padmalcom",
"mysecrettoken"
)
val git =
KGit.cloneRepository {
setURI("https://huggingface.co/datasets/sth/images")
setTimeout(60)
setProgressMonitor(TextProgressMonitor())
setCredentialsProvider(cp)
}
FileOutputStream("./images/images.csv").apply { writeCsv(images) }
git.add {
addFilepattern("images.csv")
}
for (i in images) {
FileUtils.copyFile(
File("./files/${i.id}"),
File("./images/${i.id + File(i.fileName).extension }")
)
git.add {
addFilepattern("${i.id + File(i.fileName).extension }")
}
}
val revCommit = git.commit {
author = person
message = "Uploading images at " + LocalDateTime.now()
.format(DateTimeFormatter.ISO_DATE_TIME)
setCredentialsProvider(cp)
}
val push = git.push {
setCredentialsProvider(cp)
}
```
### Expected behavior
The git server is expected to return the Content-Type _application/x-git-receive-pack-result_.
### Environment info
It is independent from the datasets library. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7225/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7225/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7224/comments | https://api.github.com/repos/huggingface/datasets/issues/7224/events | https://github.com/huggingface/datasets/pull/7224 | 2,583,233,980 | PR_kwDODunzps5-bLNR | 7,224 | fallback to default feature casting in case custom features not available during dataset loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-12T16:13:56Z | 2024-10-12T16:13:56Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7224.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7224",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7224.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7224"
} | a fix for #7223 in case datasets is happy to support this kind of extensibility! seems cool / powerful for allowing sharing of datasets with potentially different feature types | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7224/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7224/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7223/comments | https://api.github.com/repos/huggingface/datasets/issues/7223/events | https://github.com/huggingface/datasets/issues/7223 | 2,583,231,590 | I_kwDODunzps6Z-Pxm | 7,223 | Fallback to arrow defaults when loading dataset with custom features that aren't registered locally | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-12T16:08:20Z | 2024-10-12T16:08:20Z | null | CONTRIBUTOR | null | null | null | ### Describe the bug
Datasets allows users to create and register custom features.
However if datasets are then pushed to the hub, this means that anyone calling load_dataset without registering the custom Features in the same way as the dataset creator will get an error message.
It would be nice to offer a fallback in this case.
### Steps to reproduce the bug
```python
load_dataset("alex-hh/custom-features-example")
```
(Dataset creation process - must be run in separate session so that NewFeature isn't registered in session in which download is attempted:)
```python
from dataclasses import dataclass, field
import pyarrow as pa
from datasets.features.features import register_feature
from datasets import Dataset, Features, Value, load_dataset
from datasets import Feature
@dataclass
class NewFeature(Feature):
_type: str = field(default="NewFeature", init=False, repr=False)
def __call__(self):
return pa.int32()
def examples_generator():
for i in range(5):
yield {"feature": i}
ds = Dataset.from_generator(examples_generator, features=Features(feature=NewFeature()))
ds.push_to_hub("alex-hh/custom-features-example")
register_feature(NewFeature, "NewFeature")
```
### Expected behavior
It would be nice, and offer greater extensibility, if there was some kind of graceful fallback mechanism in place for cases where user-defined features are stored in the dataset but not available locally.
### Environment info
3.0.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7223/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7222/comments | https://api.github.com/repos/huggingface/datasets/issues/7222/events | https://github.com/huggingface/datasets/issues/7222 | 2,582,678,033 | I_kwDODunzps6Z8IoR | 7,222 | TypeError: Couldn't cast array of type string to null in long json | {
"avatar_url": "https://avatars.githubusercontent.com/u/5142577?v=4",
"events_url": "https://api.github.com/users/nokados/events{/privacy}",
"followers_url": "https://api.github.com/users/nokados/followers",
"following_url": "https://api.github.com/users/nokados/following{/other_user}",
"gists_url": "https://api.github.com/users/nokados/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nokados",
"id": 5142577,
"login": "nokados",
"node_id": "MDQ6VXNlcjUxNDI1Nzc=",
"organizations_url": "https://api.github.com/users/nokados/orgs",
"received_events_url": "https://api.github.com/users/nokados/received_events",
"repos_url": "https://api.github.com/users/nokados/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nokados/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nokados/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nokados",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-12T08:14:59Z | 2024-10-12T08:14:59Z | null | NONE | null | null | null | ### Describe the bug
In general, changing the type from string to null is allowed within a dataset — there are even examples of this in the documentation.
However, if the dataset is large and unevenly distributed, this allowance stops working. The schema gets locked in after reading a chunk.
Consequently, if all values in the first chunk of a field are, for example, null, the field will be locked as type null, and if a string appears in that field in the second chunk, it will trigger this error:
<details>
<summary>Traceback </summary>
```
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1868 try:
-> 1869 writer.write_table(table)
1870 except CastError as cast_error:
14 frames
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size)
579 pa_table = pa_table.combine_chunks()
--> 580 pa_table = table_cast(pa_table, self._schema)
581 if self.embed_local_files:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema)
2291 if table.schema != schema:
-> 2292 return cast_table_to_schema(table, schema)
2293 elif table.schema.metadata != schema.metadata:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema)
2244 )
-> 2245 arrays = [
2246 cast_array_to_feature(
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0)
2245 arrays = [
-> 2246 cast_array_to_feature(
2247 table[name] if name in table_column_names else pa.array([None] * len(table), type=schema.field(name).type),
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs)
1794 if isinstance(array, pa.ChunkedArray):
-> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1796 else:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0)
1794 if isinstance(array, pa.ChunkedArray):
-> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1796 else:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_array_to_feature(array, feature, allow_primitive_to_str, allow_decimal_to_str)
2101 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 2102 return array_cast(
2103 array,
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs)
1796 else:
-> 1797 return func(array, *args, **kwargs)
1798
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str)
1947 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
-> 1948 raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
1949 return array.cast(pa_type)
TypeError: Couldn't cast array of type string to null
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[<ipython-input-353-e02f83980611>](https://localhost:8080/#) in <cell line: 1>()
----> 1 dd = load_dataset("json", data_files=["TEST.json"])
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2094
2095 # Download and prepare data
-> 2096 builder_instance.download_and_prepare(
2097 download_config=download_config,
2098 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
922 if num_proc is not None:
923 prepare_split_kwargs["num_proc"] = num_proc
--> 924 self._download_and_prepare(
925 dl_manager=dl_manager,
926 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
997 try:
998 # Prepare split will record examples associated to the split
--> 999 self._prepare_split(split_generator, **prepare_split_kwargs)
1000 except OSError as e:
1001 raise OSError(
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1738 job_id = 0
1739 with pbar:
-> 1740 for job_id, done, content in self._prepare_split_single(
1741 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1742 ):
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1894 if isinstance(e, DatasetGenerationError):
1895 raise
-> 1896 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1897
1898 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
</details>
### Steps to reproduce the bug
```python
import json
from datasets import load_dataset
with open("TEST.json", "w") as f:
row = {"ballast": "qwerty" * 1000, "b": None}
row_str = json.dumps(row) + "\n"
line_size = len(row_str)
chunk_size = 10 << 20
lines_in_chunk = chunk_size // line_size + 1
print(f"Writing {lines_in_chunk} lines")
for i in range(lines_in_chunk):
f.write(row_str)
null_row = {"ballast": "Gotcha", "b": "Not Null"}
f.write(json.dumps(null_row) + "\n")
load_dataset("json", data_files=["TEST.json"])
```
### Expected behavior
Concatenation of the chunks without errors
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.24.7
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7222/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7221/comments | https://api.github.com/repos/huggingface/datasets/issues/7221/events | https://github.com/huggingface/datasets/pull/7221 | 2,582,114,631 | PR_kwDODunzps5-YNI8 | 7,221 | add Feature base class to improve support for user-defined features | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-11T20:10:27Z | 2024-10-15T20:44:35Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7221",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7221"
} | intended as fix for #7220 if this kind of extensibility is something that datasets is willing to support!
```python
class ListOfStrs:
requires_encoding = True
def encode_example(self, value):
if isinstance(value, str):
return [str]
else:
return value
feats = Features(strlist=ListOfStrs())
feats.encode_example({"strlist": "a"})["strlist"] == feats["strlist"].encode_example("a")
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7221/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7220/comments | https://api.github.com/repos/huggingface/datasets/issues/7220/events | https://github.com/huggingface/datasets/issues/7220 | 2,582,036,110 | I_kwDODunzps6Z5r6O | 7,220 | Custom features not compatible with special encoding/decoding logic | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-11T19:20:11Z | 2024-10-11T20:15:57Z | null | CONTRIBUTOR | null | null | null | ### Describe the bug
It is possible to register custom features using datasets.features.features.register_feature (https://github.com/huggingface/datasets/pull/6727)
However such features are not compatible with Features.encode_example/decode_example if they require special encoding / decoding logic because encode_nested_example / decode_nested_example checks whether the feature is in a fixed list of encodable types:
https://github.com/huggingface/datasets/blob/16a121d7821a7691815a966270f577e2c503473f/src/datasets/features/features.py#L1349
This prevents the extensibility of features to complex cases
### Steps to reproduce the bug
```python
class ListOfStrs:
def encode_example(self, value):
if isinstance(value, str):
return [str]
else:
return value
feats = Features(strlist=ListOfStrs())
assert feats.encode_example({"strlist": "a"})["strlist"] = feats["strlist"].encode_example("a")}
```
### Expected behavior
Registered feature types should be encoded based on some property of the feature (e.g. requires_encoding)?
### Environment info
3.0.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7220/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7219/comments | https://api.github.com/repos/huggingface/datasets/issues/7219/events | https://github.com/huggingface/datasets/pull/7219 | 2,581,708,084 | PR_kwDODunzps5-XAQF | 7,219 | bump fsspec | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2024-10-11T15:56:36Z | 2024-10-14T08:21:56Z | 2024-10-14T08:21:55Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7219.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7219",
"merged_at": "2024-10-14T08:21:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7219.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7219"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7219/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7217/comments | https://api.github.com/repos/huggingface/datasets/issues/7217/events | https://github.com/huggingface/datasets/issues/7217 | 2,581,095,098 | I_kwDODunzps6Z2GK6 | 7,217 | ds.map(f, num_proc=10) is slower than df.apply | {
"avatar_url": "https://avatars.githubusercontent.com/u/178981231?v=4",
"events_url": "https://api.github.com/users/lanlanlanlanlanlan365/events{/privacy}",
"followers_url": "https://api.github.com/users/lanlanlanlanlanlan365/followers",
"following_url": "https://api.github.com/users/lanlanlanlanlanlan365/following{/other_user}",
"gists_url": "https://api.github.com/users/lanlanlanlanlanlan365/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lanlanlanlanlanlan365",
"id": 178981231,
"login": "lanlanlanlanlanlan365",
"node_id": "U_kgDOCqsJbw",
"organizations_url": "https://api.github.com/users/lanlanlanlanlanlan365/orgs",
"received_events_url": "https://api.github.com/users/lanlanlanlanlanlan365/received_events",
"repos_url": "https://api.github.com/users/lanlanlanlanlanlan365/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lanlanlanlanlanlan365/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lanlanlanlanlanlan365/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lanlanlanlanlanlan365",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-11T11:04:05Z | 2024-10-11T11:04:05Z | null | NONE | null | null | null | ### Describe the bug
pandas columns: song_id, song_name
ds = Dataset.from_pandas(df)
def has_cover(song_name):
if song_name is None or pd.isna(song_name):
return False
return 'cover' in song_name.lower()
df['has_cover'] = df.song_name.progress_apply(has_cover)
ds = ds.map(lambda x: {'has_cover': has_cover(x['song_name'])}, num_proc=10)
time cost:
1. df.apply: 100%|██████████| 12500592/12500592 [00:13<00:00, 959825.47it/s]
2. ds.map: Map (num_proc=10): 31%
3899028/12500592 [00:28<00:38, 222532.89 examples/s]
### Steps to reproduce the bug
pandas columns: song_id, song_name
ds = Dataset.from_pandas(df)
def has_cover(song_name):
if song_name is None or pd.isna(song_name):
return False
return 'cover' in song_name.lower()
df['has_cover'] = df.song_name.progress_apply(has_cover)
ds = ds.map(lambda x: {'has_cover': has_cover(x['song_name'])}, num_proc=10)
### Expected behavior
ds.map is ~num_proc faster than df.apply
### Environment info
pandas: 2.2.2
datasets: 2.19.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7217/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7215/comments | https://api.github.com/repos/huggingface/datasets/issues/7215/events | https://github.com/huggingface/datasets/issues/7215 | 2,579,942,939 | I_kwDODunzps6Zxs4b | 7,215 | Iterable dataset map with explicit features causes slowdown for Sequence features | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-10T22:08:20Z | 2024-10-10T22:10:32Z | null | CONTRIBUTOR | null | null | null | ### Describe the bug
When performing map, it's nice to be able to pass the new feature type, and indeed required by interleave and concatenate datasets.
However, this can cause a major slowdown for certain types of array features due to the features being re-encoded.
This is separate to the slowdown reported in #7206
### Steps to reproduce the bug
```
from datasets import Dataset, Features, Array3D, Sequence, Value
import numpy as np
import time
features=Features(**{"array0": Sequence(feature=Value("float32"), length=-1), "array1": Sequence(feature=Value("float32"), length=-1)})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [5000,10000]*25] for i in range(2)}, features=features)
```
```
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy").map(lambda x: x)
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~1.5 s on main
```
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy").map(lambda x: x, features=features)
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~ 3 s on main
### Expected behavior
I'm not 100% sure whether passing new feature types to formatted outputs of map should be supported or not, but assuming it should, then there should be a cost-free way to specify the new feature type - knowing feature type is required by interleave_datasets and concatenate_datasets for example
### Environment info
3.0.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7215/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7215/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7214/comments | https://api.github.com/repos/huggingface/datasets/issues/7214/events | https://github.com/huggingface/datasets/issues/7214 | 2,578,743,713 | I_kwDODunzps6ZtIGh | 7,214 | Formatted map + with_format(None) changes array dtype for iterable datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-10-10T12:45:16Z | 2024-10-12T16:55:57Z | null | CONTRIBUTOR | null | null | null | ### Describe the bug
When applying with_format -> map -> with_format(None), array dtypes seem to change, even if features are passed
### Steps to reproduce the bug
```python
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32")})
dataset = Dataset.from_dict({f"array0": [np.zeros((100,10,10), dtype=np.float32)]*25}, features=features)
ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features)
ex_0 = next(iter(ds))
ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features).with_format(None)
ex_1 = next(iter(ds))
assert ex_1["array0"].dtype == ex_0["array0"].dtype, f"{ex_1['array0'].dtype} {ex_0['array0'].dtype}"
```
### Expected behavior
Dtypes should be preserved.
### Environment info
3.0.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7214/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7213/comments | https://api.github.com/repos/huggingface/datasets/issues/7213/events | https://github.com/huggingface/datasets/issues/7213 | 2,578,675,565 | I_kwDODunzps6Zs3dt | 7,213 | Add with_rank to Dataset.from_generator | {
"avatar_url": "https://avatars.githubusercontent.com/u/17828087?v=4",
"events_url": "https://api.github.com/users/muthissar/events{/privacy}",
"followers_url": "https://api.github.com/users/muthissar/followers",
"following_url": "https://api.github.com/users/muthissar/following{/other_user}",
"gists_url": "https://api.github.com/users/muthissar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/muthissar",
"id": 17828087,
"login": "muthissar",
"node_id": "MDQ6VXNlcjE3ODI4MDg3",
"organizations_url": "https://api.github.com/users/muthissar/orgs",
"received_events_url": "https://api.github.com/users/muthissar/received_events",
"repos_url": "https://api.github.com/users/muthissar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/muthissar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muthissar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/muthissar",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-10-10T12:15:29Z | 2024-10-10T12:17:11Z | null | NONE | null | null | null | ### Feature request
Add `with_rank` to `Dataset.from_generator` similar to `Dataset.map` and `Dataset.filter`.
### Motivation
As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU, where the rank can be used to select GPU IDs. For now, rank can be added in the `gen_kwars` argument; however, this, in turn, includes the rank when computing the fingerprint.
### Your contribution
Added #7199 which passes rank based on the `job_id` set by `num_proc`. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7213/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7212/comments | https://api.github.com/repos/huggingface/datasets/issues/7212/events | https://github.com/huggingface/datasets/issues/7212 | 2,578,641,259 | I_kwDODunzps6ZsvFr | 7,212 | Windows do not supprot signal.alarm and singal.signal | {
"avatar_url": "https://avatars.githubusercontent.com/u/33832672?v=4",
"events_url": "https://api.github.com/users/TomasJavurek/events{/privacy}",
"followers_url": "https://api.github.com/users/TomasJavurek/followers",
"following_url": "https://api.github.com/users/TomasJavurek/following{/other_user}",
"gists_url": "https://api.github.com/users/TomasJavurek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TomasJavurek",
"id": 33832672,
"login": "TomasJavurek",
"node_id": "MDQ6VXNlcjMzODMyNjcy",
"organizations_url": "https://api.github.com/users/TomasJavurek/orgs",
"received_events_url": "https://api.github.com/users/TomasJavurek/received_events",
"repos_url": "https://api.github.com/users/TomasJavurek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TomasJavurek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomasJavurek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TomasJavurek",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-10T12:00:19Z | 2024-10-10T12:00:19Z | null | NONE | null | null | null | ### Describe the bug
signal.alarm and signal.signal are used in the load.py module, but these are not supported by Windows.
### Steps to reproduce the bug
lighteval accelerate --model_args "pretrained=gpt2,trust_remote_code=True" --tasks "community|kinit_sts" --custom_tasks "community_tasks/kinit_evals.py" --output_dir "./evals"
### Expected behavior
proceed with input(..) method
### Environment info
Windows 11 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7212/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7211/comments | https://api.github.com/repos/huggingface/datasets/issues/7211/events | https://github.com/huggingface/datasets/issues/7211 | 2,576,400,502 | I_kwDODunzps6ZkMB2 | 7,211 | Describe only selected fields in README | {
"avatar_url": "https://avatars.githubusercontent.com/u/67658835?v=4",
"events_url": "https://api.github.com/users/alozowski/events{/privacy}",
"followers_url": "https://api.github.com/users/alozowski/followers",
"following_url": "https://api.github.com/users/alozowski/following{/other_user}",
"gists_url": "https://api.github.com/users/alozowski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alozowski",
"id": 67658835,
"login": "alozowski",
"node_id": "MDQ6VXNlcjY3NjU4ODM1",
"organizations_url": "https://api.github.com/users/alozowski/orgs",
"received_events_url": "https://api.github.com/users/alozowski/received_events",
"repos_url": "https://api.github.com/users/alozowski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alozowski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alozowski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alozowski",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-10-09T16:25:47Z | 2024-10-09T16:25:47Z | null | NONE | null | null | null | ### Feature request
Hi Datasets team!
Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some fields in order not to overcomplicate the Dataset Preview and filter out some fields
### Motivation
The `Results` dataset for the Open LLM Leaderboard contains json files with a complex nested structure. I would like to add `README.md` there to use the SQL console, for example. But if I describe the structure of this dataset completely, it will overcomplicate the use of Dataset Preview and the total number of columns will exceed 50
### Your contribution
I'm afraid I'm not familiar with the project structure, so I won't be able to open a PR, but I'll try to help with something else if possible | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7211/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7210/comments | https://api.github.com/repos/huggingface/datasets/issues/7210/events | https://github.com/huggingface/datasets/issues/7210 | 2,575,883,939 | I_kwDODunzps6ZiN6j | 7,210 | Convert Array features to numpy arrays rather than lists by default | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-10-09T13:05:21Z | 2024-10-09T13:05:21Z | null | CONTRIBUTOR | null | null | null | ### Feature request
It is currently quite easy to cause massive slowdowns when using datasets and not familiar with the underlying data conversions by e.g. making bad choices of formatting.
Would it be more user-friendly to set defaults that avoid this as much as possible? e.g. format Array features as numpy arrays rather than python lists
### Motivation
Default array formatting leads to slow performance: e.g.
```python
import numpy as np
from datasets import Dataset, Features, Array3D
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)
```
```python
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~1.4 s
```python
ds = dataset.to_iterable_dataset()
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~10s
```python
ds = dataset.with_format("numpy")
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~0.04s
```python
ds = dataset.to_iterable_dataset().with_format("numpy")
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
```
~0.04s
### Your contribution
May be able to contribute | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7210/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7210/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7209/comments | https://api.github.com/repos/huggingface/datasets/issues/7209/events | https://github.com/huggingface/datasets/pull/7209 | 2,575,526,651 | PR_kwDODunzps5-D6_P | 7,209 | Preserve features in iterable dataset.filter | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2024-10-09T10:42:05Z | 2024-10-16T11:27:22Z | 2024-10-09T16:04:07Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7209.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7209",
"merged_at": "2024-10-09T16:04:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7209.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7209"
} | Fixes example in #7208 - I'm not sure what other checks I should do? @lhoestq
I also haven't thought hard about the concatenate / interleaving example iterables but think this might work assuming that features are either all identical or None? | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7209/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7208/comments | https://api.github.com/repos/huggingface/datasets/issues/7208/events | https://github.com/huggingface/datasets/issues/7208 | 2,575,484,256 | I_kwDODunzps6ZgsVg | 7,208 | Iterable dataset.filter should not override features | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2024-10-09T10:23:45Z | 2024-10-09T16:08:46Z | 2024-10-09T16:08:45Z | CONTRIBUTOR | null | null | null | ### Describe the bug
When calling filter on an iterable dataset, the features get set to None
### Steps to reproduce the bug
import numpy as np
import time
from datasets import Dataset, Features, Array3D
```python
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)
ds = dataset.to_iterable_dataset()
orig_column_names = ds.column_names
ds = ds.filter(lambda x: True)
assert ds.column_names == orig_column_names
```
### Expected behavior
Filter should preserve features information
### Environment info
3.0.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7208/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7207/comments | https://api.github.com/repos/huggingface/datasets/issues/7207/events | https://github.com/huggingface/datasets/pull/7207 | 2,573,582,335 | PR_kwDODunzps59-Dms | 7,207 | apply formatting after iter_arrow to speed up format -> map, filter for iterable datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 13 | 2024-10-08T15:44:53Z | 2024-10-17T12:30:28Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7207.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7207",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7207.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7207"
} | I got to this by hacking around a bit but it seems to solve #7206
I have no idea if this approach makes sense or would break something else?
Could maybe work on a full pr if this looks reasonable @lhoestq ? I imagine the same issue might affect other iterable dataset methods? | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7207/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7207/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7206/comments | https://api.github.com/repos/huggingface/datasets/issues/7206/events | https://github.com/huggingface/datasets/issues/7206 | 2,573,567,467 | I_kwDODunzps6ZZYXr | 7,206 | Slow iteration for iterable dataset with numpy formatting for array data | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-10-08T15:38:11Z | 2024-10-17T17:14:52Z | null | CONTRIBUTOR | null | null | null | ### Describe the bug
When working with large arrays, setting with_format to e.g. numpy then applying map causes a significant slowdown for iterable datasets.
### Steps to reproduce the bug
```python
import numpy as np
import time
from datasets import Dataset, Features, Array3D
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)
```
Then
```python
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy").map(lambda x: x)
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
print(t1-t0)
```
takes 27 s, whereas
```python
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy")
ds = dataset.to_iterable_dataset()
t0 = time.time()
for ex in ds:
pass
t1 = time.time()
print(t1 - t0)
```
takes ~1s
### Expected behavior
Map should not introduce a slowdown when formatting is enabled.
### Environment info
3.0.2 | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7206/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7205/comments | https://api.github.com/repos/huggingface/datasets/issues/7205/events | https://github.com/huggingface/datasets/pull/7205 | 2,573,490,859 | PR_kwDODunzps599w0I | 7,205 | fix ci benchmark | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2024-10-08T15:06:18Z | 2024-10-08T15:25:28Z | 2024-10-08T15:25:25Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7205.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7205",
"merged_at": "2024-10-08T15:25:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7205.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7205"
} | we're not using the benchmarks anymore + they were not working anyway due to token permissions
I keep the code in case we ever want to re-run the benchmark manually | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7205/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7205/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7204/comments | https://api.github.com/repos/huggingface/datasets/issues/7204/events | https://github.com/huggingface/datasets/pull/7204 | 2,573,289,063 | PR_kwDODunzps599Hem | 7,204 | fix unbatched arrow map for iterable datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2024-10-08T13:54:09Z | 2024-10-08T14:19:47Z | 2024-10-08T14:19:47Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7204.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7204",
"merged_at": "2024-10-08T14:19:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7204.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7204"
} | Fixes the bug when applying map to an arrow-formatted iterable dataset described here:
https://github.com/huggingface/datasets/issues/6833#issuecomment-2399903885
```python
from datasets import load_dataset
ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
ds = ds.with_format("arrow").map(lambda x: x)
for ex in ds:
pass
```
@lhoestq | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7204/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7204/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7203/comments | https://api.github.com/repos/huggingface/datasets/issues/7203/events | https://github.com/huggingface/datasets/pull/7203 | 2,573,154,222 | PR_kwDODunzps598skr | 7,203 | with_format docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2024-10-08T13:05:19Z | 2024-10-08T13:13:12Z | 2024-10-08T13:13:05Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7203",
"merged_at": "2024-10-08T13:13:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7203"
} | reported at https://github.com/huggingface/datasets/issues/3444 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7203/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7202/comments | https://api.github.com/repos/huggingface/datasets/issues/7202/events | https://github.com/huggingface/datasets/issues/7202 | 2,572,583,798 | I_kwDODunzps6ZVoN2 | 7,202 | `from_parquet` return type annotation | {
"avatar_url": "https://avatars.githubusercontent.com/u/45285915?v=4",
"events_url": "https://api.github.com/users/saiden89/events{/privacy}",
"followers_url": "https://api.github.com/users/saiden89/followers",
"following_url": "https://api.github.com/users/saiden89/following{/other_user}",
"gists_url": "https://api.github.com/users/saiden89/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/saiden89",
"id": 45285915,
"login": "saiden89",
"node_id": "MDQ6VXNlcjQ1Mjg1OTE1",
"organizations_url": "https://api.github.com/users/saiden89/orgs",
"received_events_url": "https://api.github.com/users/saiden89/received_events",
"repos_url": "https://api.github.com/users/saiden89/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/saiden89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saiden89/subscriptions",
"type": "User",
"url": "https://api.github.com/users/saiden89",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-08T09:08:10Z | 2024-10-08T09:08:10Z | null | NONE | null | null | null | ### Describe the bug
As already posted in https://github.com/microsoft/pylance-release/issues/6534, the correct type hinting fails when building a dataset using the `from_parquet` constructor.
Their suggestion is to comprehensively annotate the method's return type to better align with the docstring information.
### Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_parquet(path_or_paths="file")
dataset.map(lambda x: {"new": x["old"]}, batched=True)
```
### Expected behavior
map is a [valid](https://huggingface.co/docs/datasets/v3.0.1/en/package_reference/main_classes#datasets.Dataset.map), no error should be thrown.
### Environment info
- `datasets` version: 3.0.1
- Platform: macOS-15.0.1-arm64-arm-64bit
- Python version: 3.12.6
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7202/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7201/comments | https://api.github.com/repos/huggingface/datasets/issues/7201/events | https://github.com/huggingface/datasets/issues/7201 | 2,569,837,015 | I_kwDODunzps6ZLJnX | 7,201 | `load_dataset()` of images from a single directory where `train.png` image exists | {
"avatar_url": "https://avatars.githubusercontent.com/u/56922146?v=4",
"events_url": "https://api.github.com/users/SagiPolaczek/events{/privacy}",
"followers_url": "https://api.github.com/users/SagiPolaczek/followers",
"following_url": "https://api.github.com/users/SagiPolaczek/following{/other_user}",
"gists_url": "https://api.github.com/users/SagiPolaczek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SagiPolaczek",
"id": 56922146,
"login": "SagiPolaczek",
"node_id": "MDQ6VXNlcjU2OTIyMTQ2",
"organizations_url": "https://api.github.com/users/SagiPolaczek/orgs",
"received_events_url": "https://api.github.com/users/SagiPolaczek/received_events",
"repos_url": "https://api.github.com/users/SagiPolaczek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SagiPolaczek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SagiPolaczek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SagiPolaczek",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-07T09:14:17Z | 2024-10-07T09:14:17Z | null | NONE | null | null | null | ### Describe the bug
Hey!
Firstly, thanks for maintaining such framework!
I had a small issue, where I wanted to load a custom dataset of image+text captioning. I had all of my images in a single directory, and one of the images had the name `train.png`. Then, the loaded dataset had only this image.
I guess it's related to "train" as a split name, but it's definitely an unexpected behavior :)
Unfortunately I don't have time to submit a proper PR. I'm attaching a toy example to reproduce the issue.
Thanks,
Sagi
### Steps to reproduce the bug
All of the steps I'm attaching are in a fresh env :)
```
(base) sagipolaczek@Sagis-MacBook-Pro ~ % conda activate hf_issue_env
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % python --version
Python 3.10.15
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % pip list | grep datasets
datasets 3.0.1
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % ls -la Documents/hf_datasets_issue
total 352
drwxr-xr-x 6 sagipolaczek staff 192 Oct 7 11:59 .
drwx------@ 23 sagipolaczek staff 736 Oct 7 11:46 ..
-rw-r--r--@ 1 sagipolaczek staff 72 Oct 7 11:59 metadata.csv
-rw-r--r--@ 1 sagipolaczek staff 160154 Oct 6 18:00 pika.png
-rw-r--r--@ 1 sagipolaczek staff 5495 Oct 6 12:02 pika_pika.png
-rw-r--r--@ 1 sagipolaczek staff 1753 Oct 6 11:50 train.png
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % cat Documents/hf_datasets_issue/metadata.csv
file_name,text
train.png,A train
pika.png,Pika
pika_pika.png,Pika Pika!
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % python
Python 3.10.15 (main, Oct 3 2024, 02:33:33) [Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="Documents/hf_datasets_issue/")
>>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'text'],
num_rows: 1
})
})
>>> dataset["train"][0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=354x84 at 0x10B50FD90>, 'text': 'A train'}
### DELETING `train.png` sample ###
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % vim Documents/hf_datasets_issue/metadata.csv
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % rm Documents/hf_datasets_issue/train.png
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % python
Python 3.10.15 (main, Oct 3 2024, 02:33:33) [Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="Documents/hf_datasets_issue/")
Generating train split: 2 examples [00:00, 65.99 examples/s]
>>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'text'],
num_rows: 2
})
})
>>> dataset["train"]
Dataset({
features: ['image', 'text'],
num_rows: 2
})
>>> dataset["train"][0],dataset["train"][1]
({'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=2356x1054 at 0x10DD11E70>, 'text': 'Pika'}, {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=343x154 at 0x10E258C70>, 'text': 'Pika Pika!'})
```
### Expected behavior
My expected behavior would be to get a dataset with the sample `train.png` in it (along with the others data points).
### Environment info
I've attached it in the example:
Python 3.10.15
datasets 3.0.1
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7201/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7200/comments | https://api.github.com/repos/huggingface/datasets/issues/7200/events | https://github.com/huggingface/datasets/pull/7200 | 2,567,921,694 | PR_kwDODunzps59sgRd | 7,200 | Fix the environment variable for huggingface cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/989899?v=4",
"events_url": "https://api.github.com/users/torotoki/events{/privacy}",
"followers_url": "https://api.github.com/users/torotoki/followers",
"following_url": "https://api.github.com/users/torotoki/following{/other_user}",
"gists_url": "https://api.github.com/users/torotoki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/torotoki",
"id": 989899,
"login": "torotoki",
"node_id": "MDQ6VXNlcjk4OTg5OQ==",
"organizations_url": "https://api.github.com/users/torotoki/orgs",
"received_events_url": "https://api.github.com/users/torotoki/received_events",
"repos_url": "https://api.github.com/users/torotoki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/torotoki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/torotoki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/torotoki",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2024-10-05T11:54:35Z | 2024-10-08T15:47:11Z | 2024-10-08T15:45:18Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7200.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7200",
"merged_at": "2024-10-08T15:45:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7200.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7200"
} | Resolve #6256. As far as I tested, `HF_DATASETS_CACHE` was ignored and I could not specify the cache directory at all except for the default one by this environment variable. `HF_HOME` has worked. Perhaps the recent change on file downloading by `huggingface_hub` could affect this bug.
In my testing, I could not specify the cache directory even by `load_dataset("dataset_name" cache_dir="...")`. It might be another issue. I also welcome any advice to solve this issue. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7200/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7200/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7199/comments | https://api.github.com/repos/huggingface/datasets/issues/7199/events | https://github.com/huggingface/datasets/pull/7199 | 2,566,788,225 | PR_kwDODunzps59pN_M | 7,199 | Add with_rank to Dataset.from_generator | {
"avatar_url": "https://avatars.githubusercontent.com/u/17828087?v=4",
"events_url": "https://api.github.com/users/muthissar/events{/privacy}",
"followers_url": "https://api.github.com/users/muthissar/followers",
"following_url": "https://api.github.com/users/muthissar/following{/other_user}",
"gists_url": "https://api.github.com/users/muthissar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/muthissar",
"id": 17828087,
"login": "muthissar",
"node_id": "MDQ6VXNlcjE3ODI4MDg3",
"organizations_url": "https://api.github.com/users/muthissar/orgs",
"received_events_url": "https://api.github.com/users/muthissar/received_events",
"repos_url": "https://api.github.com/users/muthissar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/muthissar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muthissar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/muthissar",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-04T16:51:53Z | 2024-10-04T16:51:53Z | null | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7199",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7199"
} | Adds `with_rank` to `Dataset.from_generator`. As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7199/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7198/comments | https://api.github.com/repos/huggingface/datasets/issues/7198/events | https://github.com/huggingface/datasets/pull/7198 | 2,566,064,849 | PR_kwDODunzps59mwgu | 7,198 | Add repeat method to datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-10-04T10:45:16Z | 2024-10-04T10:49:21Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7198.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7198",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7198.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7198"
} | Following up on discussion in #6623 and #7198 I thought this would be pretty useful for my case so had a go at implementing.
My main motivation is to be able to call iterable_dataset.repeat(None).take(samples_per_epoch) to safely avoid timeout issues in a distributed training setting. This would provide a straightforward workaround for several open issues related to this situation: https://github.com/huggingface/datasets/issues/6437, https://github.com/huggingface/datasets/issues/6594, https://github.com/huggingface/datasets/issues/6623, https://github.com/huggingface/datasets/issues/6719.
@lhoestq let me know if this looks on the right track! | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7198/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7198/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7197/comments | https://api.github.com/repos/huggingface/datasets/issues/7197/events | https://github.com/huggingface/datasets/issues/7197 | 2,565,924,788 | I_kwDODunzps6Y8Oe0 | 7,197 | ConnectionError: Couldn't reach 'allenai/c4' on the Hub (ConnectionError)数据集下不下来,怎么回事 | {
"avatar_url": "https://avatars.githubusercontent.com/u/114299344?v=4",
"events_url": "https://api.github.com/users/Mrgengli/events{/privacy}",
"followers_url": "https://api.github.com/users/Mrgengli/followers",
"following_url": "https://api.github.com/users/Mrgengli/following{/other_user}",
"gists_url": "https://api.github.com/users/Mrgengli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mrgengli",
"id": 114299344,
"login": "Mrgengli",
"node_id": "U_kgDOBtAR0A",
"organizations_url": "https://api.github.com/users/Mrgengli/orgs",
"received_events_url": "https://api.github.com/users/Mrgengli/received_events",
"repos_url": "https://api.github.com/users/Mrgengli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mrgengli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mrgengli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mrgengli",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-10-04T09:33:25Z | 2024-10-04T09:56:22Z | null | NONE | null | null | null | ### Describe the bug
from datasets import load_dataset
print("11")
traindata = load_dataset('ptb_text_only', 'penn_treebank', split='train')
print("22")
valdata = load_dataset('ptb_text_only',
'penn_treebank',
split='validation')
### Steps to reproduce the bug
1
### Expected behavior
1
### Environment info
1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7197/timeline | null | null | false |
End of preview. Expand
in Data Studio
README.md exists but content is empty.
- Downloads last month
- 7