Modalities: Image, Text
Formats: parquet
Languages: Danish
Libraries: Datasets, Dask
Kenneth Enevoldsen committed · Commit 3d87e24 · unverified · 1 Parent(s): d06be7c

Files changed (4):
  1. CHANGELOG.md +10 -0
  2. README.md +75 -3
  3. src/dynaword/tables.py +13 -15
  4. test_results.log +1400 -7
CHANGELOG.md CHANGED
@@ -5,6 +5,16 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+## [v1.2.6] - 2025-07-21
+
+### Added
+
+- Added two tables to get an overview of data by license and domain
+
+### Changed
+
+- Dataset overview table now appears in a dropdown menu
+
 ## [v1.2.5] - 2025-07-08
 
 ### Added
README.md CHANGED
@@ -198,7 +198,8 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
 - [Dataset Description](#dataset-description)
 - [Dataset Summary](#dataset-summary)
 - [Loading the dataset](#loading-the-dataset)
-- [Languages:](#languages)
+- [Languages](#languages)
+- [Domains](#domains)
 - [Dataset Structure](#dataset-structure)
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
@@ -261,7 +262,7 @@ You can also load a single subset at a time:
 ds = load_dataset(name, revision="{desired revision}")
 ```
 
-### Languages:
+### Languages
 This dataset includes the following languages:
 
 - dan-Latn
@@ -270,6 +271,77 @@ This dataset includes the following languages:
 
 Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag), using the langauge code ISO 639-3 and the script code ISO 15924. The last element denote the region variant.
 
+
+### Domains
+
+To give a structured overview of the dataset composition, we include three summary tables:
+
+- The Domain Table groups the datasets by domain (e.g., legal, books, social media) and shows the total token count for each domain.
+- The License Table categorizes the data by license type, providing transparency into the usage rights associated with each source.
+- The Main Table offers a detailed breakdown of each dataset, including a short description, its assigned domain, token count, and license.
+
+Each source is linked to a metadata card with additional information about origin, preprocessing, and license verification.
+
+
+<!-- START-DOMAIN TABLE -->
+| Domain | Source with link | N. Tokens |
+|:-------------|:---------------------------------------------------------------------------------------------------------|:------------|
+| Legal | [cellar], [eur-lex-sum-da], [fm-udgivelser], [retsinformationdk], [skat], [retspraksis], [domsdatabasen] | 2.32B |
+| Books | [ncc_books], [memo], [adl], [wikibooks], [jvj], [gutenberg], [relig] | 722.00M |
+| Conversation | [danske-taler], [opensubtitles], [ep], [ft], [spont], [naat] | 497.09M |
+| Social Media | [hest] | 389.32M |
+| Other | [ncc_parliament], [dannet], [depbank], [synne] | 340.59M |
+| Web | [ai-aktindsigt], [ncc_maalfrid], [miljoeportalen] | 295.87M |
+| Encyclopedic | [wikisource], [wiki] | 127.35M |
+| News | [ncc_newspaper], [tv2r], [nordjyllandnews] | 60.63M |
+| Medical | [health_hovedstaden] | 27.07M |
+| Readaloud | [nota] | 7.30M |
+| Dialect | [botxt] | 847.97K |
+| **Total** | | 4.78B |
+
+[ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
+[cellar]: data/cellar/cellar.md
+[danske-taler]: data/danske-taler/danske-taler.md
+[ncc_books]: data/ncc_books/ncc_books.md
+[ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
+[ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
+[ncc_parliament]: data/ncc_parliament/ncc_parliament.md
+[eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
+[miljoeportalen]: data/miljoeportalen/miljoeportalen.md
+[fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
+[memo]: data/memo/memo.md
+[opensubtitles]: data/opensubtitles/opensubtitles.md
+[retsinformationdk]: data/retsinformationdk/retsinformationdk.md
+[ep]: data/ep/ep.md
+[ft]: data/ft/ft.md
+[wikisource]: data/wikisource/wikisource.md
+[spont]: data/spont/spont.md
+[tv2r]: data/tv2r/tv2r.md
+[adl]: data/adl/adl.md
+[hest]: data/hest/hest.md
+[skat]: data/skat/skat.md
+[dannet]: data/dannet/dannet.md
+[retspraksis]: data/retspraksis/retspraksis.md
+[wikibooks]: data/wikibooks/wikibooks.md
+[jvj]: data/jvj/jvj.md
+[gutenberg]: data/gutenberg/gutenberg.md
+[botxt]: data/botxt/botxt.md
+[depbank]: data/depbank/depbank.md
+[naat]: data/naat/naat.md
+[synne]: data/synne/synne.md
+[wiki]: data/wiki/wiki.md
+[nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
+[relig]: data/relig/relig.md
+[nota]: data/nota/nota.md
+[health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
+[domsdatabasen]: data/domsdatabasen/domsdatabasen.md
+<!-- END-DOMAIN TABLE -->
+
+
+<p align="center">
+<img src="./images/domain_distribution.png" width="400" style="margin-right: 10px;" />
+</p>
+
 ## Dataset Structure
 
 The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data).
@@ -433,7 +505,7 @@ Each source is linked to a metadata card with additional information about origi
 
 
 <details>
-<summary><b>Main table</b></summary>
+<summary><b>Overview Table (click to unfold)</b></summary>
 
 You can learn more about each dataset by pressing the link in the first column.
 
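The loading snippet and the per-source links above imply that every source is exposed as its own subset of the dataset. A minimal, hedged sketch of loading a single source (the Hub repo id and the `text` column name are assumptions, not something this commit specifies) could look like:

```python
from datasets import load_dataset

# Assumed Hub repo id; any source name from the domain table (e.g. "wiki",
# "hest", "cellar") should work as the configuration name.
name = "danish-foundation-models/danish-dynaword"

# Load only the Wikipedia subset rather than the full corpus.
wiki = load_dataset(name, "wiki", split="train")

# Peek at the first document (assuming the text column is named "text").
print(wiki[0]["text"][:200])
```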
src/dynaword/tables.py CHANGED
@@ -54,7 +54,7 @@ def create_overview_table(
 ) -> pd.DataFrame:
     table = {
         "Source": [],
-        "Source with link": [],
+        "Sources": [],
         "Description": [],
         "Domain": [],
         "N. Tokens": [],
@@ -70,7 +70,7 @@
         main_domain = sheet.domains[0] if sheet.domains else ""
 
         table["Source"] += [f"{dataset_path.name}"]
-        table["Source with link"] += [f"[{dataset_path.name}]"]
+        table["Sources"] += [f"[{dataset_path.name}]"]
         table["License"] += [f"[{sheet.license_name}]"]
         table["Domain"] += [main_domain]
         table["Description"] += [sheet.short_description]
@@ -82,7 +82,7 @@
     if add_total_row:
         total_row = {
             "Source": "**Total**",
-            "Source with link": "**Total**",
+            "Sources": "**Total**",
             "Domain": "",
             "License": "",
             "Description": "",
@@ -96,12 +96,12 @@
             ignore_index=True,
         )
     if add_readme_references:
-        # replace Source with Source with link
-        df["Source"] = df["Source with link"]
-        df = df.drop(columns=["Source with link"])
+        # replace Source with Sources
+        df["Source"] = df["Sources"]
+        df = df.drop(columns=["Sources"])
     else:
-        # remove Source with link
-        df = df.drop(columns=["Source with link"])
+        # remove Sources
+        df = df.drop(columns=["Sources"])
 
     if add_readable_tokens:
         df["N. Tokens"] = df["N. Tokens"].apply(human_readable_large_int)
@@ -116,7 +116,7 @@ def create_grouped_table(
     add_total_row: bool = True,
 ) -> pd.DataFrame:
     table = {
-        "Source with link": [],
+        "Sources": [],
         group: [],
         "N. Tokens": [],
     }
@@ -129,20 +129,18 @@
         desc_stats = sheet.get_descritive_stats()
         feature = sheet.get_feature_by_string(group)
 
-        table["Source with link"] += [f"[{dataset_path.name}]"]
+        table["Sources"] += [f"[{dataset_path.name}]"]
         table[group] += [feature]
         table["N. Tokens"] += [desc_stats.number_of_tokens]
 
     if add_total_row:
-        table["Source with link"] += [""]
+        table["Sources"] += [""]
         table[group] += ["**Total**"]
         table["N. Tokens"] += [sum(table["N. Tokens"])]
 
     df = pd.DataFrame.from_dict(table)
 
-    df = df.groupby(group).agg(
-        {"Source with link": lambda x: ", ".join(x), "N. Tokens": "sum"}
-    )
+    df = df.groupby(group).agg({"Sources": lambda x: ", ".join(x), "N. Tokens": "sum"})
 
     df = df.sort_values("N. Tokens", ascending=False)
 
@@ -165,7 +163,7 @@ def create_grouped_table_str(
 ) -> str:
     table = create_grouped_table(group=group, repo_path=repo_path)
     readme_references = create_dataset_readme_references()
-    package = f"{table.to_markdown(index=False)}\n\n{readme_references}\n\n"
+    package = f"{table.to_markdown(index=False, maxcolwidths=[None, 20, None])}\n\n{readme_references}\n\n"
     return package
 
 
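The grouped tables added above are produced with an ordinary pandas groupby/agg over one row per source, concatenating the link references and summing token counts. The standalone sketch below (made-up rows, and assuming a pandas/tabulate combination where `DataFrame.to_markdown` accepts `maxcolwidths`) illustrates the pattern; it is not the repository's own helper code.

```python
import pandas as pd

# Illustrative rows only; the real values come from the per-dataset sheets.
rows = {
    "Sources": ["[wiki]", "[wikisource]", "[hest]"],
    "Domain": ["Encyclopedic", "Encyclopedic", "Social Media"],
    "N. Tokens": [122_000_000, 5_350_000, 389_320_000],
}
df = pd.DataFrame.from_dict(rows)

# One row per domain: join the source links, sum the token counts.
grouped = df.groupby("Domain").agg(
    {"Sources": lambda x: ", ".join(x), "N. Tokens": "sum"}
)
grouped = grouped.sort_values("N. Tokens", ascending=False).reset_index()

# maxcolwidths is forwarded to tabulate and wraps the Sources column so the
# rendered README table stays narrow.
print(grouped.to_markdown(index=False, maxcolwidths=[None, 20, None]))
```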
test_results.log CHANGED
@@ -1,6 +1,6 @@
 ============================= test session starts ==============================
 platform darwin -- Python 3.12.0, pytest-8.3.4, pluggy-1.5.0
-rootdir: /Users/kristianjensen/Documents/danish-dynaword
 configfile: pyproject.toml
 plugins: anyio-4.9.0
 collected 328 items
@@ -11,15 +11,1408 @@ src/tests/test_datasheets.py ........................................... [ 35%]
 ........................................................................ [ 57%]
 ................................................................. [ 76%]
 src/tests/test_load.py .. [ 77%]
-src/tests/test_quality/test_duplicates.py .............................. [ 86%]
 ......s [ 88%]
-src/tests/test_quality/test_short_texts.py ............................. [ 97%]
 ....... [ 99%]
-src/tests/test_unique_ids.py . [100%]

 =============================== warnings summary ===============================
-src/tests/test_quality/test_short_texts.py: 36 warnings
-  /Users/kristianjensen/Documents/danish-dynaword/.venv/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
 
 -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
-================= 327 passed, 1 skipped, 36 warnings in 52.74s =================

 ============================= test session starts ==============================
 platform darwin -- Python 3.12.0, pytest-8.3.4, pluggy-1.5.0
+rootdir: /Users/au561649/Github/danish-dynaword
 configfile: pyproject.toml
 plugins: anyio-4.9.0
 collected 328 items

 ........................................................................ [ 57%]
 ................................................................. [ 76%]
 src/tests/test_load.py .. [ 77%]
+src/tests/test_quality/test_duplicates.py .............FF..F.F.......... [ 86%]
 ......s [ 88%]
+src/tests/test_quality/test_short_texts.py .............FF....F......... [ 97%]
 ....... [ 99%]
+src/tests/test_unique_ids.py F [100%]

+ =================================== FAILURES ===================================
21
+ ______________________ test_no_within_data_duplicates[ep] ______________________
22
+
23
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x118b3e240>
24
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ep/ep.parquet))}
25
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ep/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
26
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
27
+
28
+ def _prepare_split_single(
29
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
30
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
31
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
32
+ generator = self._generate_tables(**gen_kwargs)
33
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
34
+ embed_local_files = file_format == "parquet"
35
+ shard_lengths = []
36
+ total_num_examples, total_num_bytes = 0, 0
37
+
38
+ shard_id = 0
39
+ num_examples_progress_update = 0
40
+ try:
41
+ writer = writer_class(
42
+ features=self.info.features,
43
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
44
+ writer_batch_size=self._writer_batch_size,
45
+ storage_options=self._fs.storage_options,
46
+ embed_local_files=embed_local_files,
47
+ )
48
+ try:
49
+ _time = time.time()
50
+ for _, table in generator:
51
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
52
+ num_examples, num_bytes = writer.finalize()
53
+ writer.close()
54
+ shard_lengths.append(num_examples)
55
+ total_num_examples += num_examples
56
+ total_num_bytes += num_bytes
57
+ shard_id += 1
58
+ writer = writer_class(
59
+ features=writer._features,
60
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
61
+ writer_batch_size=self._writer_batch_size,
62
+ storage_options=self._fs.storage_options,
63
+ embed_local_files=embed_local_files,
64
+ )
65
+ try:
66
+ > writer.write_table(table)
67
+
68
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
69
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
70
+ .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
71
+ self.pa_writer.write_table(pa_table, writer_batch_size)
72
+ pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
73
+ ???
74
+ pyarrow/error.pxi:89: in pyarrow.lib.check_status
75
+ ???
76
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
77
+
78
+ self = <fsspec.implementations.local.LocalFileOpener object at 0x114a4bfa0>
79
+ args = (<pyarrow.Buffer address=0x5ddec020000 size=75246719 is_cpu=True is_mutable=True>,)
80
+ kwargs = {}
81
+
82
+ def write(self, *args, **kwargs):
83
+ > return self.f.write(*args, **kwargs)
84
+ E OSError: [Errno 28] No space left on device
85
+
86
+ .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
87
+
88
+ The above exception was the direct cause of the following exception:
89
+
90
+ dataset_name = 'ep'
91
+
92
+ @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
93
+ def test_no_within_data_duplicates(dataset_name: str):
94
+ > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
95
+
96
+ src/tests/test_quality/test_duplicates.py:12:
97
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
98
+ .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
99
+ builder_instance.download_and_prepare(
100
+ .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
101
+ self._download_and_prepare(
102
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
103
+ self._prepare_split(split_generator, **prepare_split_kwargs)
104
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
105
+ for job_id, done, content in self._prepare_split_single(
106
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
107
+
108
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x118b3e240>
109
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ep/ep.parquet))}
110
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ep/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
111
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
112
+
113
+ def _prepare_split_single(
114
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
115
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
116
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
117
+ generator = self._generate_tables(**gen_kwargs)
118
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
119
+ embed_local_files = file_format == "parquet"
120
+ shard_lengths = []
121
+ total_num_examples, total_num_bytes = 0, 0
122
+
123
+ shard_id = 0
124
+ num_examples_progress_update = 0
125
+ try:
126
+ writer = writer_class(
127
+ features=self.info.features,
128
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
129
+ writer_batch_size=self._writer_batch_size,
130
+ storage_options=self._fs.storage_options,
131
+ embed_local_files=embed_local_files,
132
+ )
133
+ try:
134
+ _time = time.time()
135
+ for _, table in generator:
136
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
137
+ num_examples, num_bytes = writer.finalize()
138
+ writer.close()
139
+ shard_lengths.append(num_examples)
140
+ total_num_examples += num_examples
141
+ total_num_bytes += num_bytes
142
+ shard_id += 1
143
+ writer = writer_class(
144
+ features=writer._features,
145
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
146
+ writer_batch_size=self._writer_batch_size,
147
+ storage_options=self._fs.storage_options,
148
+ embed_local_files=embed_local_files,
149
+ )
150
+ try:
151
+ writer.write_table(table)
152
+ except CastError as cast_error:
153
+ raise DatasetGenerationCastError.from_cast_error(
154
+ cast_error=cast_error,
155
+ builder_name=self.info.builder_name,
156
+ gen_kwargs=gen_kwargs,
157
+ token=self.token,
158
+ )
159
+ num_examples_progress_update += len(table)
160
+ if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
161
+ _time = time.time()
162
+ yield job_id, False, num_examples_progress_update
163
+ num_examples_progress_update = 0
164
+ finally:
165
+ yield job_id, False, num_examples_progress_update
166
+ num_shards = shard_id + 1
167
+ num_examples, num_bytes = writer.finalize()
168
+ writer.close()
169
+ shard_lengths.append(num_examples)
170
+ total_num_examples += num_examples
171
+ total_num_bytes += num_bytes
172
+ except Exception as e:
173
+ # Ignore the writer's error for no examples written to the file if this error was caused by the error in _generate_examples before the first example was yielded
174
+ if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
175
+ e = e.__context__
176
+ if isinstance(e, DatasetGenerationError):
177
+ raise
178
+ > raise DatasetGenerationError("An error occurred while generating the dataset") from e
179
+ E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
180
+
181
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
182
+ ----------------------------- Captured stderr call -----------------------------
183
+
184
+ ______________________ test_no_within_data_duplicates[ft] ______________________
185
+
186
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x11137ed80>
187
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ft/ft.parquet))}
188
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ft/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
189
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
190
+
191
+ def _prepare_split_single(
192
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
193
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
194
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
195
+ generator = self._generate_tables(**gen_kwargs)
196
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
197
+ embed_local_files = file_format == "parquet"
198
+ shard_lengths = []
199
+ total_num_examples, total_num_bytes = 0, 0
200
+
201
+ shard_id = 0
202
+ num_examples_progress_update = 0
203
+ try:
204
+ writer = writer_class(
205
+ features=self.info.features,
206
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
207
+ writer_batch_size=self._writer_batch_size,
208
+ storage_options=self._fs.storage_options,
209
+ embed_local_files=embed_local_files,
210
+ )
211
+ try:
212
+ _time = time.time()
213
+ for _, table in generator:
214
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
215
+ num_examples, num_bytes = writer.finalize()
216
+ writer.close()
217
+ shard_lengths.append(num_examples)
218
+ total_num_examples += num_examples
219
+ total_num_bytes += num_bytes
220
+ shard_id += 1
221
+ writer = writer_class(
222
+ features=writer._features,
223
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
224
+ writer_batch_size=self._writer_batch_size,
225
+ storage_options=self._fs.storage_options,
226
+ embed_local_files=embed_local_files,
227
+ )
228
+ try:
229
+ > writer.write_table(table)
230
+
231
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
232
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
233
+ .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
234
+ self.pa_writer.write_table(pa_table, writer_batch_size)
235
+ pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
236
+ ???
237
+ pyarrow/error.pxi:89: in pyarrow.lib.check_status
238
+ ???
239
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
240
+
241
+ self = <fsspec.implementations.local.LocalFileOpener object at 0x1137dd150>
242
+ args = (<pyarrow.Buffer address=0x5de9c020000 size=274397630 is_cpu=True is_mutable=True>,)
243
+ kwargs = {}
244
+
245
+ def write(self, *args, **kwargs):
246
+ > return self.f.write(*args, **kwargs)
247
+ E OSError: [Errno 28] No space left on device
248
+
249
+ .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
250
+
251
+ The above exception was the direct cause of the following exception:
252
+
253
+ dataset_name = 'ft'
254
+
255
+ @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
256
+ def test_no_within_data_duplicates(dataset_name: str):
257
+ > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
258
+
259
+ src/tests/test_quality/test_duplicates.py:12:
260
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
261
+ .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
262
+ builder_instance.download_and_prepare(
263
+ .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
264
+ self._download_and_prepare(
265
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
266
+ self._prepare_split(split_generator, **prepare_split_kwargs)
267
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
268
+ for job_id, done, content in self._prepare_split_single(
269
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
270
+
271
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x11137ed80>
272
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ft/ft.parquet))}
273
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ft/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
274
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
275
+
276
+ def _prepare_split_single(
277
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
278
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
279
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
280
+ generator = self._generate_tables(**gen_kwargs)
281
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
282
+ embed_local_files = file_format == "parquet"
283
+ shard_lengths = []
284
+ total_num_examples, total_num_bytes = 0, 0
285
+
286
+ shard_id = 0
287
+ num_examples_progress_update = 0
288
+ try:
289
+ writer = writer_class(
290
+ features=self.info.features,
291
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
292
+ writer_batch_size=self._writer_batch_size,
293
+ storage_options=self._fs.storage_options,
294
+ embed_local_files=embed_local_files,
295
+ )
296
+ try:
297
+ _time = time.time()
298
+ for _, table in generator:
299
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
300
+ num_examples, num_bytes = writer.finalize()
301
+ writer.close()
302
+ shard_lengths.append(num_examples)
303
+ total_num_examples += num_examples
304
+ total_num_bytes += num_bytes
305
+ shard_id += 1
306
+ writer = writer_class(
307
+ features=writer._features,
308
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
309
+ writer_batch_size=self._writer_batch_size,
310
+ storage_options=self._fs.storage_options,
311
+ embed_local_files=embed_local_files,
312
+ )
313
+ try:
314
+ writer.write_table(table)
315
+ except CastError as cast_error:
316
+ raise DatasetGenerationCastError.from_cast_error(
317
+ cast_error=cast_error,
318
+ builder_name=self.info.builder_name,
319
+ gen_kwargs=gen_kwargs,
320
+ token=self.token,
321
+ )
322
+ num_examples_progress_update += len(table)
323
+ if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
324
+ _time = time.time()
325
+ yield job_id, False, num_examples_progress_update
326
+ num_examples_progress_update = 0
327
+ finally:
328
+ yield job_id, False, num_examples_progress_update
329
+ num_shards = shard_id + 1
330
+ num_examples, num_bytes = writer.finalize()
331
+ writer.close()
332
+ shard_lengths.append(num_examples)
333
+ total_num_examples += num_examples
334
+ total_num_bytes += num_bytes
335
+ except Exception as e:
336
+ # Ignore the writer's error for no examples written to the file if this error was caused by the error in _generate_examples before the first example was yielded
337
+ if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
338
+ e = e.__context__
339
+ if isinstance(e, DatasetGenerationError):
340
+ raise
341
+ > raise DatasetGenerationError("An error occurred while generating the dataset") from e
342
+ E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
343
+
344
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
345
+ ----------------------------- Captured stderr call -----------------------------
346
+
347
+ _____________________ test_no_within_data_duplicates[tv2r] _____________________
348
+
349
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x114c07bc0>
350
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/tv2r/tv2r.parquet))}
351
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/tv2r/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
352
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
353
+
354
+ def _prepare_split_single(
355
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
356
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
357
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
358
+ generator = self._generate_tables(**gen_kwargs)
359
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
360
+ embed_local_files = file_format == "parquet"
361
+ shard_lengths = []
362
+ total_num_examples, total_num_bytes = 0, 0
363
+
364
+ shard_id = 0
365
+ num_examples_progress_update = 0
366
+ try:
367
+ writer = writer_class(
368
+ features=self.info.features,
369
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
370
+ writer_batch_size=self._writer_batch_size,
371
+ storage_options=self._fs.storage_options,
372
+ embed_local_files=embed_local_files,
373
+ )
374
+ try:
375
+ _time = time.time()
376
+ for _, table in generator:
377
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
378
+ num_examples, num_bytes = writer.finalize()
379
+ writer.close()
380
+ shard_lengths.append(num_examples)
381
+ total_num_examples += num_examples
382
+ total_num_bytes += num_bytes
383
+ shard_id += 1
384
+ writer = writer_class(
385
+ features=writer._features,
386
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
387
+ writer_batch_size=self._writer_batch_size,
388
+ storage_options=self._fs.storage_options,
389
+ embed_local_files=embed_local_files,
390
+ )
391
+ try:
392
+ > writer.write_table(table)
393
+
394
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
395
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
396
+ .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
397
+ self.pa_writer.write_table(pa_table, writer_batch_size)
398
+ pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
399
+ ???
400
+ pyarrow/error.pxi:89: in pyarrow.lib.check_status
401
+ ???
402
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
403
+
404
+ self = <fsspec.implementations.local.LocalFileOpener object at 0x11379d9f0>
405
+ args = (<pyarrow.Buffer address=0x5cf2c0d0000 size=4000 is_cpu=True is_mutable=True>,)
406
+ kwargs = {}
407
+
408
+ def write(self, *args, **kwargs):
409
+ > return self.f.write(*args, **kwargs)
410
+ E OSError: [Errno 28] No space left on device
411
+
412
+ .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
413
+
414
+ During handling of the above exception, another exception occurred:
415
+
416
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x114c07bc0>
417
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/tv2r/tv2r.parquet))}
418
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/tv2r/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
419
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
420
+
421
+ def _prepare_split_single(
422
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
423
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
424
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
425
+ generator = self._generate_tables(**gen_kwargs)
426
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
427
+ embed_local_files = file_format == "parquet"
428
+ shard_lengths = []
429
+ total_num_examples, total_num_bytes = 0, 0
430
+
431
+ shard_id = 0
432
+ num_examples_progress_update = 0
433
+ try:
434
+ writer = writer_class(
435
+ features=self.info.features,
436
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
437
+ writer_batch_size=self._writer_batch_size,
438
+ storage_options=self._fs.storage_options,
439
+ embed_local_files=embed_local_files,
440
+ )
441
+ try:
442
+ _time = time.time()
443
+ for _, table in generator:
444
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
445
+ num_examples, num_bytes = writer.finalize()
446
+ writer.close()
447
+ shard_lengths.append(num_examples)
448
+ total_num_examples += num_examples
449
+ total_num_bytes += num_bytes
450
+ shard_id += 1
451
+ writer = writer_class(
452
+ features=writer._features,
453
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
454
+ writer_batch_size=self._writer_batch_size,
455
+ storage_options=self._fs.storage_options,
456
+ embed_local_files=embed_local_files,
457
+ )
458
+ try:
459
+ writer.write_table(table)
460
+ except CastError as cast_error:
461
+ raise DatasetGenerationCastError.from_cast_error(
462
+ cast_error=cast_error,
463
+ builder_name=self.info.builder_name,
464
+ gen_kwargs=gen_kwargs,
465
+ token=self.token,
466
+ )
467
+ num_examples_progress_update += len(table)
468
+ if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
469
+ _time = time.time()
470
+ yield job_id, False, num_examples_progress_update
471
+ num_examples_progress_update = 0
472
+ finally:
473
+ yield job_id, False, num_examples_progress_update
474
+ num_shards = shard_id + 1
475
+ > num_examples, num_bytes = writer.finalize()
476
+
477
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1886:
478
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
479
+ .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:644: in finalize
480
+ self.stream.close()
481
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
482
+
483
+ self = <fsspec.implementations.local.LocalFileOpener object at 0x11379d9f0>
484
+
485
+ def close(self):
486
+ > return self.f.close()
487
+ E OSError: [Errno 28] No space left on device
488
+
489
+ .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:444: OSError
490
+
491
+ The above exception was the direct cause of the following exception:
492
+
493
+ dataset_name = 'tv2r'
494
+
495
+ @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
496
+ def test_no_within_data_duplicates(dataset_name: str):
497
+ > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
498
+
499
+ src/tests/test_quality/test_duplicates.py:12:
500
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
501
+ .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
502
+ builder_instance.download_and_prepare(
503
+ .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
504
+ self._download_and_prepare(
505
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
506
+ self._prepare_split(split_generator, **prepare_split_kwargs)
507
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
508
+ for job_id, done, content in self._prepare_split_single(
509
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
510
+
511
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x114c07bc0>
512
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/tv2r/tv2r.parquet))}
513
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/tv2r/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
514
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
515
+
516
+ def _prepare_split_single(
517
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
518
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
519
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
520
+ generator = self._generate_tables(**gen_kwargs)
521
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
522
+ embed_local_files = file_format == "parquet"
523
+ shard_lengths = []
524
+ total_num_examples, total_num_bytes = 0, 0
525
+
526
+ shard_id = 0
527
+ num_examples_progress_update = 0
528
+ try:
529
+ writer = writer_class(
530
+ features=self.info.features,
531
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
532
+ writer_batch_size=self._writer_batch_size,
533
+ storage_options=self._fs.storage_options,
534
+ embed_local_files=embed_local_files,
535
+ )
536
+ try:
537
+ _time = time.time()
538
+ for _, table in generator:
539
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
540
+ num_examples, num_bytes = writer.finalize()
541
+ writer.close()
542
+ shard_lengths.append(num_examples)
543
+ total_num_examples += num_examples
544
+ total_num_bytes += num_bytes
545
+ shard_id += 1
546
+ writer = writer_class(
547
+ features=writer._features,
548
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
549
+ writer_batch_size=self._writer_batch_size,
550
+ storage_options=self._fs.storage_options,
551
+ embed_local_files=embed_local_files,
552
+ )
553
+ try:
554
+ writer.write_table(table)
555
+ except CastError as cast_error:
556
+ raise DatasetGenerationCastError.from_cast_error(
557
+ cast_error=cast_error,
558
+ builder_name=self.info.builder_name,
559
+ gen_kwargs=gen_kwargs,
560
+ token=self.token,
561
+ )
562
+ num_examples_progress_update += len(table)
563
+ if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
564
+ _time = time.time()
565
+ yield job_id, False, num_examples_progress_update
566
+ num_examples_progress_update = 0
567
+ finally:
568
+ yield job_id, False, num_examples_progress_update
569
+ num_shards = shard_id + 1
570
+ num_examples, num_bytes = writer.finalize()
571
+ writer.close()
572
+ shard_lengths.append(num_examples)
573
+ total_num_examples += num_examples
574
+ total_num_bytes += num_bytes
575
+ except Exception as e:
576
+ # Ignore the writer's error for no examples written to the file if this error was caused by the error in _generate_examples before the first example was yielded
577
+ if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
578
+ e = e.__context__
579
+ if isinstance(e, DatasetGenerationError):
580
+ raise
581
+ > raise DatasetGenerationError("An error occurred while generating the dataset") from e
582
+ E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
583
+
584
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
585
+ ----------------------------- Captured stderr call -----------------------------
586
+
587
+ _____________________ test_no_within_data_duplicates[hest] _____________________
588
+
589
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x1137b2360>
590
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/hest/hest.parquet))}
591
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/hest/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
592
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
593
+
594
+ def _prepare_split_single(
595
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
596
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
597
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
598
+ generator = self._generate_tables(**gen_kwargs)
599
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
600
+ embed_local_files = file_format == "parquet"
601
+ shard_lengths = []
602
+ total_num_examples, total_num_bytes = 0, 0
603
+
604
+ shard_id = 0
605
+ num_examples_progress_update = 0
606
+ try:
607
+ writer = writer_class(
608
+ features=self.info.features,
609
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
610
+ writer_batch_size=self._writer_batch_size,
611
+ storage_options=self._fs.storage_options,
612
+ embed_local_files=embed_local_files,
613
+ )
614
+ try:
615
+ _time = time.time()
616
+ for _, table in generator:
617
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
618
+ num_examples, num_bytes = writer.finalize()
619
+ writer.close()
620
+ shard_lengths.append(num_examples)
621
+ total_num_examples += num_examples
622
+ total_num_bytes += num_bytes
623
+ shard_id += 1
624
+ writer = writer_class(
625
+ features=writer._features,
626
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
627
+ writer_batch_size=self._writer_batch_size,
628
+ storage_options=self._fs.storage_options,
629
+ embed_local_files=embed_local_files,
630
+ )
631
+ try:
632
+ > writer.write_table(table)
633
+
634
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
635
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
636
+ .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
637
+ self.pa_writer.write_table(pa_table, writer_batch_size)
638
+ pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
639
+ ???
640
+ pyarrow/error.pxi:89: in pyarrow.lib.check_status
641
+ ???
642
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
643
+
644
+ self = <fsspec.implementations.local.LocalFileOpener object at 0x114af1390>
645
+ args = (<pyarrow.Buffer address=0x5e004020000 size=147880457 is_cpu=True is_mutable=True>,)
646
+ kwargs = {}
647
+
648
+ def write(self, *args, **kwargs):
649
+ > return self.f.write(*args, **kwargs)
650
+ E OSError: [Errno 28] No space left on device
651
+
652
+ .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
653
+
654
+ The above exception was the direct cause of the following exception:
655
+
656
+ dataset_name = 'hest'
657
+
658
+ @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
659
+ def test_no_within_data_duplicates(dataset_name: str):
660
+ > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
661
+
662
+ src/tests/test_quality/test_duplicates.py:12:
663
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
664
+ .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
665
+ builder_instance.download_and_prepare(
666
+ .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
667
+ self._download_and_prepare(
668
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
669
+ self._prepare_split(split_generator, **prepare_split_kwargs)
670
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
671
+ for job_id, done, content in self._prepare_split_single(
672
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
673
+
674
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x1137b2360>
675
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/hest/hest.parquet))}
676
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/hest/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
677
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
678
+
679
+ def _prepare_split_single(
680
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
681
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
682
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
683
+ generator = self._generate_tables(**gen_kwargs)
684
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
685
+ embed_local_files = file_format == "parquet"
686
+ shard_lengths = []
687
+ total_num_examples, total_num_bytes = 0, 0
688
+
689
+ shard_id = 0
690
+ num_examples_progress_update = 0
691
+ try:
692
+ writer = writer_class(
693
+ features=self.info.features,
694
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
695
+ writer_batch_size=self._writer_batch_size,
696
+ storage_options=self._fs.storage_options,
697
+ embed_local_files=embed_local_files,
698
+ )
699
+ try:
700
+ _time = time.time()
701
+ for _, table in generator:
702
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
703
+ num_examples, num_bytes = writer.finalize()
704
+ writer.close()
705
+ shard_lengths.append(num_examples)
706
+ total_num_examples += num_examples
707
+ total_num_bytes += num_bytes
708
+ shard_id += 1
709
+ writer = writer_class(
710
+ features=writer._features,
711
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
712
+ writer_batch_size=self._writer_batch_size,
713
+ storage_options=self._fs.storage_options,
714
+ embed_local_files=embed_local_files,
715
+ )
716
+ try:
717
+ writer.write_table(table)
718
+ except CastError as cast_error:
719
+ raise DatasetGenerationCastError.from_cast_error(
720
+ cast_error=cast_error,
721
+ builder_name=self.info.builder_name,
722
+ gen_kwargs=gen_kwargs,
723
+ token=self.token,
724
+ )
725
+ num_examples_progress_update += len(table)
726
+ if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
727
+ _time = time.time()
728
+ yield job_id, False, num_examples_progress_update
729
+ num_examples_progress_update = 0
730
+ finally:
731
+ yield job_id, False, num_examples_progress_update
732
+ num_shards = shard_id + 1
733
+ num_examples, num_bytes = writer.finalize()
734
+ writer.close()
735
+ shard_lengths.append(num_examples)
736
+ total_num_examples += num_examples
737
+ total_num_bytes += num_bytes
738
+ except Exception as e:
739
+ # Ignore the writer's error for no examples written to the file if this error was caused by the error in _generate_examples before the first example was yielded
740
+ if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
741
+ e = e.__context__
742
+ if isinstance(e, DatasetGenerationError):
743
+ raise
744
+ > raise DatasetGenerationError("An error occurred while generating the dataset") from e
745
+ E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
746
+
747
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
748
+ ----------------------------- Captured stderr call -----------------------------
749
+
750
+ ________________________ test_no_one_word_documents[ep] ________________________
751
+
752
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x114c1bb90>
753
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ep/ep.parquet))}
754
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ep/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
755
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
756
+
757
+ def _prepare_split_single(
758
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
759
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
760
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
761
+ generator = self._generate_tables(**gen_kwargs)
762
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
763
+ embed_local_files = file_format == "parquet"
764
+ shard_lengths = []
765
+ total_num_examples, total_num_bytes = 0, 0
766
+
767
+ shard_id = 0
768
+ num_examples_progress_update = 0
769
+ try:
770
+ writer = writer_class(
771
+ features=self.info.features,
772
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
773
+ writer_batch_size=self._writer_batch_size,
774
+ storage_options=self._fs.storage_options,
775
+ embed_local_files=embed_local_files,
776
+ )
777
+ try:
778
+ _time = time.time()
779
+ for _, table in generator:
780
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
781
+ num_examples, num_bytes = writer.finalize()
782
+ writer.close()
783
+ shard_lengths.append(num_examples)
784
+ total_num_examples += num_examples
785
+ total_num_bytes += num_bytes
786
+ shard_id += 1
787
+ writer = writer_class(
788
+ features=writer._features,
789
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
790
+ writer_batch_size=self._writer_batch_size,
791
+ storage_options=self._fs.storage_options,
792
+ embed_local_files=embed_local_files,
793
+ )
794
+ try:
795
+ > writer.write_table(table)
796
+
797
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
798
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
799
+ .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
800
+ self.pa_writer.write_table(pa_table, writer_batch_size)
801
+ pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
802
+ ???
803
+ pyarrow/error.pxi:89: in pyarrow.lib.check_status
804
+ ???
805
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
806
+
807
+ self = <fsspec.implementations.local.LocalFileOpener object at 0x113e86290>
808
+ args = (<pyarrow.Buffer address=0x5e1f0020000 size=76944794 is_cpu=True is_mutable=True>,)
809
+ kwargs = {}
810
+
811
+ def write(self, *args, **kwargs):
812
+ > return self.f.write(*args, **kwargs)
813
+ E OSError: [Errno 28] No space left on device
814
+
815
+ .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
816
+
817
+ The above exception was the direct cause of the following exception:
818
+
819
+ dataset_name = 'ep'
820
+
821
+ @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
822
+ # @pytest.mark.skip("This tests currently fails")
823
+ def test_no_one_word_documents(dataset_name: str):
824
+ > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
825
+
826
+ src/tests/test_quality/test_short_texts.py:14:
827
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
828
+ .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
829
+ builder_instance.download_and_prepare(
830
+ .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
831
+ self._download_and_prepare(
832
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
833
+ self._prepare_split(split_generator, **prepare_split_kwargs)
834
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
835
+ for job_id, done, content in self._prepare_split_single(
836
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
837
+
838
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x114c1bb90>
839
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ep/ep.parquet))}
840
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ep/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
841
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
842
+
843
+ def _prepare_split_single(
844
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
845
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
846
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
847
+ generator = self._generate_tables(**gen_kwargs)
848
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
849
+ embed_local_files = file_format == "parquet"
850
+ shard_lengths = []
851
+ total_num_examples, total_num_bytes = 0, 0
852
+
853
+ shard_id = 0
854
+ num_examples_progress_update = 0
855
+ try:
856
+ writer = writer_class(
857
+ features=self.info.features,
858
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
859
+ writer_batch_size=self._writer_batch_size,
860
+ storage_options=self._fs.storage_options,
861
+ embed_local_files=embed_local_files,
862
+ )
863
+ try:
864
+ _time = time.time()
865
+ for _, table in generator:
866
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
867
+ num_examples, num_bytes = writer.finalize()
868
+ writer.close()
869
+ shard_lengths.append(num_examples)
870
+ total_num_examples += num_examples
871
+ total_num_bytes += num_bytes
872
+ shard_id += 1
873
+ writer = writer_class(
874
+ features=writer._features,
875
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
876
+ writer_batch_size=self._writer_batch_size,
877
+ storage_options=self._fs.storage_options,
878
+ embed_local_files=embed_local_files,
879
+ )
880
+ try:
881
+ writer.write_table(table)
882
+ except CastError as cast_error:
883
+ raise DatasetGenerationCastError.from_cast_error(
884
+ cast_error=cast_error,
885
+ builder_name=self.info.builder_name,
886
+ gen_kwargs=gen_kwargs,
887
+ token=self.token,
888
+ )
889
+ num_examples_progress_update += len(table)
890
+ if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
891
+ _time = time.time()
892
+ yield job_id, False, num_examples_progress_update
893
+ num_examples_progress_update = 0
894
+ finally:
895
+ yield job_id, False, num_examples_progress_update
896
+ num_shards = shard_id + 1
897
+ num_examples, num_bytes = writer.finalize()
898
+ writer.close()
899
+ shard_lengths.append(num_examples)
900
+ total_num_examples += num_examples
901
+ total_num_bytes += num_bytes
902
+ except Exception as e:
903
+ # Ignore the writer's error for no examples written to the file if this error was caused by the error in _generate_examples before the first example was yielded
904
+ if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
905
+ e = e.__context__
906
+ if isinstance(e, DatasetGenerationError):
907
+ raise
908
+ > raise DatasetGenerationError("An error occurred while generating the dataset") from e
909
+ E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
910
+
911
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
912
+ ----------------------------- Captured stderr call -----------------------------
913
+
914
+ ________________________ test_no_one_word_documents[ft] ________________________
915
+
916
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x12e558620>
917
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ft/ft.parquet))}
918
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ft/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
919
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
920
+
921
+ def _prepare_split_single(
922
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
923
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
924
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
925
+ generator = self._generate_tables(**gen_kwargs)
926
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
927
+ embed_local_files = file_format == "parquet"
928
+ shard_lengths = []
929
+ total_num_examples, total_num_bytes = 0, 0
930
+
931
+ shard_id = 0
932
+ num_examples_progress_update = 0
933
+ try:
934
+ writer = writer_class(
935
+ features=self.info.features,
936
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
937
+ writer_batch_size=self._writer_batch_size,
938
+ storage_options=self._fs.storage_options,
939
+ embed_local_files=embed_local_files,
940
+ )
941
+ try:
942
+ _time = time.time()
943
+ for _, table in generator:
944
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
945
+ num_examples, num_bytes = writer.finalize()
946
+ writer.close()
947
+ shard_lengths.append(num_examples)
948
+ total_num_examples += num_examples
949
+ total_num_bytes += num_bytes
950
+ shard_id += 1
951
+ writer = writer_class(
952
+ features=writer._features,
953
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
954
+ writer_batch_size=self._writer_batch_size,
955
+ storage_options=self._fs.storage_options,
956
+ embed_local_files=embed_local_files,
957
+ )
958
+ try:
959
+ > writer.write_table(table)
960
+
961
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
962
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
963
+ .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
964
+ self.pa_writer.write_table(pa_table, writer_batch_size)
965
+ pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
966
+ ???
967
+ pyarrow/error.pxi:89: in pyarrow.lib.check_status
968
+ ???
969
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
970
+
971
+ self = <fsspec.implementations.local.LocalFileOpener object at 0x113eb1d50>
972
+ args = (<pyarrow.Buffer address=0x5e238020000 size=274397630 is_cpu=True is_mutable=True>,)
973
+ kwargs = {}
974
+
975
+ def write(self, *args, **kwargs):
976
+ > return self.f.write(*args, **kwargs)
977
+ E OSError: [Errno 28] No space left on device
978
+
979
+ .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
980
+
981
+ The above exception was the direct cause of the following exception:
982
+
983
+ dataset_name = 'ft'
984
+
985
+ @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
986
+ # @pytest.mark.skip("This tests currently fails")
987
+ def test_no_one_word_documents(dataset_name: str):
988
+ > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
989
+
990
+ src/tests/test_quality/test_short_texts.py:14:
991
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
992
+ .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
993
+ builder_instance.download_and_prepare(
994
+ .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
995
+ self._download_and_prepare(
996
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
997
+ self._prepare_split(split_generator, **prepare_split_kwargs)
998
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
999
+ for job_id, done, content in self._prepare_split_single(
1000
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1001
+
1002
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x12e558620>
1003
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ft/ft.parquet))}
1004
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ft/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
1005
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
1006
+
1007
+ def _prepare_split_single(
1008
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
1009
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
1010
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
1011
+ generator = self._generate_tables(**gen_kwargs)
1012
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
1013
+ embed_local_files = file_format == "parquet"
1014
+ shard_lengths = []
1015
+ total_num_examples, total_num_bytes = 0, 0
1016
+
1017
+ shard_id = 0
1018
+ num_examples_progress_update = 0
1019
+ try:
1020
+ writer = writer_class(
1021
+ features=self.info.features,
1022
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1023
+ writer_batch_size=self._writer_batch_size,
1024
+ storage_options=self._fs.storage_options,
1025
+ embed_local_files=embed_local_files,
1026
+ )
1027
+ try:
1028
+ _time = time.time()
1029
+ for _, table in generator:
1030
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
1031
+ num_examples, num_bytes = writer.finalize()
1032
+ writer.close()
1033
+ shard_lengths.append(num_examples)
1034
+ total_num_examples += num_examples
1035
+ total_num_bytes += num_bytes
1036
+ shard_id += 1
1037
+ writer = writer_class(
1038
+ features=writer._features,
1039
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1040
+ writer_batch_size=self._writer_batch_size,
1041
+ storage_options=self._fs.storage_options,
1042
+ embed_local_files=embed_local_files,
1043
+ )
1044
+ try:
1045
+ writer.write_table(table)
1046
+ except CastError as cast_error:
1047
+ raise DatasetGenerationCastError.from_cast_error(
1048
+ cast_error=cast_error,
1049
+ builder_name=self.info.builder_name,
1050
+ gen_kwargs=gen_kwargs,
1051
+ token=self.token,
1052
+ )
1053
+ num_examples_progress_update += len(table)
1054
+ if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
1055
+ _time = time.time()
1056
+ yield job_id, False, num_examples_progress_update
1057
+ num_examples_progress_update = 0
1058
+ finally:
1059
+ yield job_id, False, num_examples_progress_update
1060
+ num_shards = shard_id + 1
1061
+ num_examples, num_bytes = writer.finalize()
1062
+ writer.close()
1063
+ shard_lengths.append(num_examples)
1064
+ total_num_examples += num_examples
1065
+ total_num_bytes += num_bytes
1066
+ except Exception as e:
1067
+ # Ignore the writer's error for no examples written to the file if this error was caused by the error in _generate_examples before the first example was yielded
1068
+ if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1069
+ e = e.__context__
1070
+ if isinstance(e, DatasetGenerationError):
1071
+ raise
1072
+ > raise DatasetGenerationError("An error occurred while generating the dataset") from e
1073
+ E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
1074
+
1075
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
1076
+ ----------------------------- Captured stderr call -----------------------------
1077
+
1078
+ _______________________ test_no_one_word_documents[hest] _______________________
1079
+
1080
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x118b3f1a0>
1081
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/hest/hest.parquet))}
1082
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/hest/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
1083
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
1084
+
1085
+ def _prepare_split_single(
1086
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
1087
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
1088
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
1089
+ generator = self._generate_tables(**gen_kwargs)
1090
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
1091
+ embed_local_files = file_format == "parquet"
1092
+ shard_lengths = []
1093
+ total_num_examples, total_num_bytes = 0, 0
1094
+
1095
+ shard_id = 0
1096
+ num_examples_progress_update = 0
1097
+ try:
1098
+ writer = writer_class(
1099
+ features=self.info.features,
1100
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1101
+ writer_batch_size=self._writer_batch_size,
1102
+ storage_options=self._fs.storage_options,
1103
+ embed_local_files=embed_local_files,
1104
+ )
1105
+ try:
1106
+ _time = time.time()
1107
+ for _, table in generator:
1108
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
1109
+ num_examples, num_bytes = writer.finalize()
1110
+ writer.close()
1111
+ shard_lengths.append(num_examples)
1112
+ total_num_examples += num_examples
1113
+ total_num_bytes += num_bytes
1114
+ shard_id += 1
1115
+ writer = writer_class(
1116
+ features=writer._features,
1117
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1118
+ writer_batch_size=self._writer_batch_size,
1119
+ storage_options=self._fs.storage_options,
1120
+ embed_local_files=embed_local_files,
1121
+ )
1122
+ try:
1123
+ > writer.write_table(table)
1124
+
1125
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
1126
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1127
+ .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
1128
+ self.pa_writer.write_table(pa_table, writer_batch_size)
1129
+ pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
1130
+ ???
1131
+ pyarrow/error.pxi:89: in pyarrow.lib.check_status
1132
+ ???
1133
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1134
+
1135
+ self = <fsspec.implementations.local.LocalFileOpener object at 0x113e85810>
1136
+ args = (<pyarrow.Buffer address=0x5e3c8020000 size=95688808 is_cpu=True is_mutable=True>,)
1137
+ kwargs = {}
1138
+
1139
+ def write(self, *args, **kwargs):
1140
+ > return self.f.write(*args, **kwargs)
1141
+ E OSError: [Errno 28] No space left on device
1142
+
1143
+ .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
1144
+
1145
+ The above exception was the direct cause of the following exception:
1146
+
1147
+ dataset_name = 'hest'
1148
+
1149
+ @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
1150
+ # @pytest.mark.skip("This tests currently fails")
1151
+ def test_no_one_word_documents(dataset_name: str):
1152
+ > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
1153
+
1154
+ src/tests/test_quality/test_short_texts.py:14:
1155
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1156
+ .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
1157
+ builder_instance.download_and_prepare(
1158
+ .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
1159
+ self._download_and_prepare(
1160
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
1161
+ self._prepare_split(split_generator, **prepare_split_kwargs)
1162
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
1163
+ for job_id, done, content in self._prepare_split_single(
1164
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1165
+
1166
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x118b3f1a0>
1167
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/hest/hest.parquet))}
1168
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/hest/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
1169
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
1170
+
1171
+ def _prepare_split_single(
1172
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
1173
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
1174
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
1175
+ generator = self._generate_tables(**gen_kwargs)
1176
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
1177
+ embed_local_files = file_format == "parquet"
1178
+ shard_lengths = []
1179
+ total_num_examples, total_num_bytes = 0, 0
1180
+
1181
+ shard_id = 0
1182
+ num_examples_progress_update = 0
1183
+ try:
1184
+ writer = writer_class(
1185
+ features=self.info.features,
1186
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1187
+ writer_batch_size=self._writer_batch_size,
1188
+ storage_options=self._fs.storage_options,
1189
+ embed_local_files=embed_local_files,
1190
+ )
1191
+ try:
1192
+ _time = time.time()
1193
+ for _, table in generator:
1194
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
1195
+ num_examples, num_bytes = writer.finalize()
1196
+ writer.close()
1197
+ shard_lengths.append(num_examples)
1198
+ total_num_examples += num_examples
1199
+ total_num_bytes += num_bytes
1200
+ shard_id += 1
1201
+ writer = writer_class(
1202
+ features=writer._features,
1203
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1204
+ writer_batch_size=self._writer_batch_size,
1205
+ storage_options=self._fs.storage_options,
1206
+ embed_local_files=embed_local_files,
1207
+ )
1208
+ try:
1209
+ writer.write_table(table)
1210
+ except CastError as cast_error:
1211
+ raise DatasetGenerationCastError.from_cast_error(
1212
+ cast_error=cast_error,
1213
+ builder_name=self.info.builder_name,
1214
+ gen_kwargs=gen_kwargs,
1215
+ token=self.token,
1216
+ )
1217
+ num_examples_progress_update += len(table)
1218
+ if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
1219
+ _time = time.time()
1220
+ yield job_id, False, num_examples_progress_update
1221
+ num_examples_progress_update = 0
1222
+ finally:
1223
+ yield job_id, False, num_examples_progress_update
1224
+ num_shards = shard_id + 1
1225
+ num_examples, num_bytes = writer.finalize()
1226
+ writer.close()
1227
+ shard_lengths.append(num_examples)
1228
+ total_num_examples += num_examples
1229
+ total_num_bytes += num_bytes
1230
+ except Exception as e:
1231
+ # Ignore the writer's error for no examples written to the file if this error was caused by the error in _generate_examples before the first example was yielded
1232
+ if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1233
+ e = e.__context__
1234
+ if isinstance(e, DatasetGenerationError):
1235
+ raise
1236
+ > raise DatasetGenerationError("An error occurred while generating the dataset") from e
1237
+ E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
1238
+
1239
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
1240
+ ----------------------------- Captured stderr call -----------------------------
1241
+
1242
+ __________________________ test_ensure_ids_are_unique __________________________
1243
+
1244
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x113ec1970>
1245
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/cellar/cellar.parquet))}
1246
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/default/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
1247
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
1248
+
1249
+ def _prepare_split_single(
1250
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
1251
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
1252
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
1253
+ generator = self._generate_tables(**gen_kwargs)
1254
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
1255
+ embed_local_files = file_format == "parquet"
1256
+ shard_lengths = []
1257
+ total_num_examples, total_num_bytes = 0, 0
1258
+
1259
+ shard_id = 0
1260
+ num_examples_progress_update = 0
1261
+ try:
1262
+ writer = writer_class(
1263
+ features=self.info.features,
1264
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1265
+ writer_batch_size=self._writer_batch_size,
1266
+ storage_options=self._fs.storage_options,
1267
+ embed_local_files=embed_local_files,
1268
+ )
1269
+ try:
1270
+ _time = time.time()
1271
+ for _, table in generator:
1272
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
1273
+ num_examples, num_bytes = writer.finalize()
1274
+ writer.close()
1275
+ shard_lengths.append(num_examples)
1276
+ total_num_examples += num_examples
1277
+ total_num_bytes += num_bytes
1278
+ shard_id += 1
1279
+ writer = writer_class(
1280
+ features=writer._features,
1281
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1282
+ writer_batch_size=self._writer_batch_size,
1283
+ storage_options=self._fs.storage_options,
1284
+ embed_local_files=embed_local_files,
1285
+ )
1286
+ try:
1287
+ > writer.write_table(table)
1288
+
1289
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
1290
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1291
+ .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
1292
+ self.pa_writer.write_table(pa_table, writer_batch_size)
1293
+ pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
1294
+ ???
1295
+ pyarrow/error.pxi:89: in pyarrow.lib.check_status
1296
+ ???
1297
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1298
+
1299
+ self = <fsspec.implementations.local.LocalFileOpener object at 0x113aaffd0>
1300
+ args = (<pyarrow.Buffer address=0x5e500020000 size=81139164 is_cpu=True is_mutable=True>,)
1301
+ kwargs = {}
1302
+
1303
+ def write(self, *args, **kwargs):
1304
+ > return self.f.write(*args, **kwargs)
1305
+ E OSError: [Errno 28] No space left on device
1306
+
1307
+ .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
1308
+
1309
+ The above exception was the direct cause of the following exception:
1310
+
1311
+ def test_ensure_ids_are_unique():
1312
+ name = str(repo_path.resolve())
1313
+ > ds = load_dataset(name, split="train")
1314
+
1315
+ src/tests/test_unique_ids.py:11:
1316
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1317
+ .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
1318
+ builder_instance.download_and_prepare(
1319
+ .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
1320
+ self._download_and_prepare(
1321
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
1322
+ self._prepare_split(split_generator, **prepare_split_kwargs)
1323
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
1324
+ for job_id, done, content in self._prepare_split_single(
1325
+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
1326
+
1327
+ self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x113ec1970>
1328
+ gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/cellar/cellar.parquet))}
1329
+ fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/default/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
1330
+ file_format = 'arrow', max_shard_size = 500000000, job_id = 0
1331
+
1332
+ def _prepare_split_single(
1333
+ self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
1334
+ ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
1335
+ gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
1336
+ generator = self._generate_tables(**gen_kwargs)
1337
+ writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
1338
+ embed_local_files = file_format == "parquet"
1339
+ shard_lengths = []
1340
+ total_num_examples, total_num_bytes = 0, 0
1341
+
1342
+ shard_id = 0
1343
+ num_examples_progress_update = 0
1344
+ try:
1345
+ writer = writer_class(
1346
+ features=self.info.features,
1347
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1348
+ writer_batch_size=self._writer_batch_size,
1349
+ storage_options=self._fs.storage_options,
1350
+ embed_local_files=embed_local_files,
1351
+ )
1352
+ try:
1353
+ _time = time.time()
1354
+ for _, table in generator:
1355
+ if max_shard_size is not None and writer._num_bytes > max_shard_size:
1356
+ num_examples, num_bytes = writer.finalize()
1357
+ writer.close()
1358
+ shard_lengths.append(num_examples)
1359
+ total_num_examples += num_examples
1360
+ total_num_bytes += num_bytes
1361
+ shard_id += 1
1362
+ writer = writer_class(
1363
+ features=writer._features,
1364
+ path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1365
+ writer_batch_size=self._writer_batch_size,
1366
+ storage_options=self._fs.storage_options,
1367
+ embed_local_files=embed_local_files,
1368
+ )
1369
+ try:
1370
+ writer.write_table(table)
1371
+ except CastError as cast_error:
1372
+ raise DatasetGenerationCastError.from_cast_error(
1373
+ cast_error=cast_error,
1374
+ builder_name=self.info.builder_name,
1375
+ gen_kwargs=gen_kwargs,
1376
+ token=self.token,
1377
+ )
1378
+ num_examples_progress_update += len(table)
1379
+ if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
1380
+ _time = time.time()
1381
+ yield job_id, False, num_examples_progress_update
1382
+ num_examples_progress_update = 0
1383
+ finally:
1384
+ yield job_id, False, num_examples_progress_update
1385
+ num_shards = shard_id + 1
1386
+ num_examples, num_bytes = writer.finalize()
1387
+ writer.close()
1388
+ shard_lengths.append(num_examples)
1389
+ total_num_examples += num_examples
1390
+ total_num_bytes += num_bytes
1391
+ except Exception as e:
1392
+ # Ignore the writer's error for no examples written to the file if this error was caused by the error in _generate_examples before the first example was yielded
1393
+ if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1394
+ e = e.__context__
1395
+ if isinstance(e, DatasetGenerationError):
1396
+ raise
1397
+ > raise DatasetGenerationError("An error occurred while generating the dataset") from e
1398
+ E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
1399
+
1400
+ .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
1401
+ ----------------------------- Captured stderr call -----------------------------
1402
+
1403
+
1404
  =============================== warnings summary ===============================
1405
+ src/tests/test_quality/test_short_texts.py: 33 warnings
1406
+ /Users/au561649/Github/danish-dynaword/.venv/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
1407
 
1408
  -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
1409
+ =========================== short test summary info ============================
1410
+ FAILED src/tests/test_quality/test_duplicates.py::test_no_within_data_duplicates[ep]
1411
+ FAILED src/tests/test_quality/test_duplicates.py::test_no_within_data_duplicates[ft]
1412
+ FAILED src/tests/test_quality/test_duplicates.py::test_no_within_data_duplicates[tv2r]
1413
+ FAILED src/tests/test_quality/test_duplicates.py::test_no_within_data_duplicates[hest]
1414
+ FAILED src/tests/test_quality/test_short_texts.py::test_no_one_word_documents[ep]
1415
+ FAILED src/tests/test_quality/test_short_texts.py::test_no_one_word_documents[ft]
1416
+ FAILED src/tests/test_quality/test_short_texts.py::test_no_one_word_documents[hest]
1417
+ FAILED src/tests/test_unique_ids.py::test_ensure_ids_are_unique - datasets.ex...
1418
+ ====== 8 failed, 319 passed, 1 skipped, 33 warnings in 365.20s (0:06:05) =======