Commit 9db187b by Kenneth Enevoldsen (unverified) · 1 parent: 3d87e24

## [v1.2.6] - 2025-07-21


### Added

- Added two tables to give an overview of the data by license and domain

### Changed

- The dataset overview table now appears in a dropdown menu

README.md CHANGED
@@ -182,7 +182,7 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
182
  <!-- START README TABLE -->
183
  | | |
184
  | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
185
- | **Version** | 1.2.5 ([Changelog](/CHANGELOG.md)) |
186
  | **Language** | dan, dansk, Danish |
187
  | **License** | Openly licensed; see the respective dataset |
188
  | **Models** | For models trained using this data, see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
@@ -200,6 +200,7 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
200
  - [Loading the dataset](#loading-the-dataset)
201
  - [Languages](#languages)
202
  - [Domains](#domains)
 
203
  - [Dataset Structure](#dataset-structure)
204
  - [Data Instances](#data-instances)
205
  - [Data Fields](#data-fields)
@@ -274,17 +275,15 @@ Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_t
274
 
275
  ### Domains
276
 
277
- To give a structured overview of the dataset composition, we include three summary tables:
278
 
279
- - The Domain Table groups the datasets by domain (e.g., legal, books, social media) and shows the total token count for each domain.
280
- - The License Table categorizes the data by license type, providing transparency into the usage rights associated with each source.
281
- - The Main Table offers a detailed breakdown of each dataset, including a short description, its assigned domain, token count, and license.
282
 
283
- Each source is linked to a metadata card with additional information about origin, preprocessing, and license verification.
284
 
285
 
286
  <!-- START-DOMAIN TABLE -->
287
- | Domain | Source with link | N. Tokens |
288
  |:-------------|:---------------------------------------------------------------------------------------------------------|:------------|
289
  | Legal | [cellar], [eur-lex-sum-da], [fm-udgivelser], [retsinformationdk], [skat], [retspraksis], [domsdatabasen] | 2.32B |
290
  | Books | [ncc_books], [memo], [adl], [wikibooks], [jvj], [gutenberg], [relig] | 722.00M |
@@ -337,11 +336,73 @@ Each source is linked to a metadata card with additional information about origi
337
  [domsdatabasen]: data/domsdatabasen/domsdatabasen.md
338
  <!-- END-DOMAIN TABLE -->
339
 
340
 
341
  <p align="center">
342
  <img src="./images/domain_distribution.png" width="400" style="margin-right: 10px;" />
343
  </p>
344
 
345
  ## Dataset Structure
346
 
347
  The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data).
@@ -392,117 +453,8 @@ This data generally contains no annotation besides the metadata attached to each
392
 
393
  ### Source Data
394
 
395
- To give a structured overview of the dataset composition, we include three summary tables:
396
-
397
- - The Domain Table groups the datasets by domain (e.g., legal, books, social media) and shows the total token count for each domain.
398
- - The License Table categorizes the data by license type, providing transparency into the usage rights associated with each source.
399
- - The Main Table offers a detailed breakdown of each dataset, including a short description, its assigned domain, token count, and license.
400
-
401
- Each source is linked to a metadata card with additional information about origin, preprocessing, and license verification.
402
-
403
- **Domain Table**
404
- <!-- START-DOMAIN TABLE -->
405
- | Domain | Source with link | N. Tokens |
406
- |:-------------|:---------------------------------------------------------------------------------------------------------|:------------|
407
- | Legal | [cellar], [eur-lex-sum-da], [fm-udgivelser], [retsinformationdk], [skat], [retspraksis], [domsdatabasen] | 2.32B |
408
- | Books | [ncc_books], [memo], [adl], [wikibooks], [jvj], [gutenberg], [relig] | 722.00M |
409
- | Conversation | [danske-taler], [opensubtitles], [ep], [ft], [spont], [naat] | 497.09M |
410
- | Social Media | [hest] | 389.32M |
411
- | Other | [ncc_parliament], [dannet], [depbank], [synne] | 340.59M |
412
- | Web | [ai-aktindsigt], [ncc_maalfrid], [miljoeportalen] | 295.87M |
413
- | Encyclopedic | [wikisource], [wiki] | 127.35M |
414
- | News | [ncc_newspaper], [tv2r], [nordjyllandnews] | 60.63M |
415
- | Medical | [health_hovedstaden] | 27.07M |
416
- | Readaloud | [nota] | 7.30M |
417
- | Dialect | [botxt] | 847.97K |
418
- | **Total** | | 4.78B |
419
-
420
- [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
421
- [cellar]: data/cellar/cellar.md
422
- [danske-taler]: data/danske-taler/danske-taler.md
423
- [ncc_books]: data/ncc_books/ncc_books.md
424
- [ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
425
- [ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
426
- [ncc_parliament]: data/ncc_parliament/ncc_parliament.md
427
- [eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
428
- [miljoeportalen]: data/miljoeportalen/miljoeportalen.md
429
- [fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
430
- [memo]: data/memo/memo.md
431
- [opensubtitles]: data/opensubtitles/opensubtitles.md
432
- [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
433
- [ep]: data/ep/ep.md
434
- [ft]: data/ft/ft.md
435
- [wikisource]: data/wikisource/wikisource.md
436
- [spont]: data/spont/spont.md
437
- [tv2r]: data/tv2r/tv2r.md
438
- [adl]: data/adl/adl.md
439
- [hest]: data/hest/hest.md
440
- [skat]: data/skat/skat.md
441
- [dannet]: data/dannet/dannet.md
442
- [retspraksis]: data/retspraksis/retspraksis.md
443
- [wikibooks]: data/wikibooks/wikibooks.md
444
- [jvj]: data/jvj/jvj.md
445
- [gutenberg]: data/gutenberg/gutenberg.md
446
- [botxt]: data/botxt/botxt.md
447
- [depbank]: data/depbank/depbank.md
448
- [naat]: data/naat/naat.md
449
- [synne]: data/synne/synne.md
450
- [wiki]: data/wiki/wiki.md
451
- [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
452
- [relig]: data/relig/relig.md
453
- [nota]: data/nota/nota.md
454
- [health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
455
- [domsdatabasen]: data/domsdatabasen/domsdatabasen.md
456
- <!-- END-DOMAIN TABLE -->
457
-
458
- **License Table**
459
- <!-- START-LICENSE TABLE -->
460
- | License | Source with link | N. Tokens |
461
- |:-------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------|
462
- | cc0-1.0 | [danske-taler], [ncc_books], [ncc_newspaper], [miljoeportalen], [opensubtitles], [ep], [ft], [wikisource], [spont], [adl], [hest], [skat], [retspraksis], [wikibooks], [botxt], [naat], [synne], [wiki], [nordjyllandnews], [relig], [nota], [health_hovedstaden] | 1.99B |
463
- | cc-by-sa-4.0 | [cellar], [eur-lex-sum-da], [fm-udgivelser], [memo], [tv2r], [jvj], [depbank] | 1.37B |
464
- | other | [ncc_maalfrid], [ncc_parliament], [retsinformationdk], [dannet], [gutenberg], [domsdatabasen] | 1.28B |
465
- | apache-2.0 | [ai-aktindsigt] | 139.23M |
466
- | **Total** | | 4.78B |
467
-
468
- [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
469
- [cellar]: data/cellar/cellar.md
470
- [danske-taler]: data/danske-taler/danske-taler.md
471
- [ncc_books]: data/ncc_books/ncc_books.md
472
- [ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
473
- [ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
474
- [ncc_parliament]: data/ncc_parliament/ncc_parliament.md
475
- [eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
476
- [miljoeportalen]: data/miljoeportalen/miljoeportalen.md
477
- [fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
478
- [memo]: data/memo/memo.md
479
- [opensubtitles]: data/opensubtitles/opensubtitles.md
480
- [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
481
- [ep]: data/ep/ep.md
482
- [ft]: data/ft/ft.md
483
- [wikisource]: data/wikisource/wikisource.md
484
- [spont]: data/spont/spont.md
485
- [tv2r]: data/tv2r/tv2r.md
486
- [adl]: data/adl/adl.md
487
- [hest]: data/hest/hest.md
488
- [skat]: data/skat/skat.md
489
- [dannet]: data/dannet/dannet.md
490
- [retspraksis]: data/retspraksis/retspraksis.md
491
- [wikibooks]: data/wikibooks/wikibooks.md
492
- [jvj]: data/jvj/jvj.md
493
- [gutenberg]: data/gutenberg/gutenberg.md
494
- [botxt]: data/botxt/botxt.md
495
- [depbank]: data/depbank/depbank.md
496
- [naat]: data/naat/naat.md
497
- [synne]: data/synne/synne.md
498
- [wiki]: data/wiki/wiki.md
499
- [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
500
- [relig]: data/relig/relig.md
501
- [nota]: data/nota/nota.md
502
- [health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
503
- [domsdatabasen]: data/domsdatabasen/domsdatabasen.md
504
- <!-- END-LICENSE TABLE -->
505
 
 
506
 
507
  <details>
508
  <summary><b>Overview Table (click to unfold)</b></summary>
@@ -617,11 +569,7 @@ In addition to data specific processing we also run a series automated quality c
617
 
618
 
619
  ### Dataset Statistics
620
- The following plot show the domains distribution of the following within the dynaword:
621
-
622
- <p align="center">
623
- <img src="./images/domain_distribution.png" width="400" style="margin-right: 10px;" />
624
- </p>
625
 
626
  <details>
627
  <summary>Per dataset histograms</summary>
 
182
  <!-- START README TABLE -->
183
  | | |
184
  | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
185
+ | **Version** | 1.2.6 ([Changelog](/CHANGELOG.md)) |
186
  | **Language** | dan, dansk, Danish |
187
  | **License** | Openly licensed; see the respective dataset |
188
  | **Models** | For models trained using this data, see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
 
200
  - [Loading the dataset](#loading-the-dataset)
201
  - [Languages](#languages)
202
  - [Domains](#domains)
203
+ - [Licensing](#licensing)
204
  - [Dataset Structure](#dataset-structure)
205
  - [Data Instances](#data-instances)
206
  - [Data Fields](#data-fields)
 
275
 
276
  ### Domains
277
 
278
+ This dynaword consists of data from various domains (e.g., legal, books, social media). The following table and figure give an overview of the relative distribution of these domains. For a full overview of the sources, see the [source data section](#source-data).
279
 
280
+ <div style="display: flex; gap: 20px; align-items: flex-start;">
 
 
281
 
282
+ <div style="flex: 1;">
283
 
284
 
285
  <!-- START-DOMAIN TABLE -->
286
+ | Domain | Sources | N. Tokens |
287
  |:-------------|:---------------------------------------------------------------------------------------------------------|:------------|
288
  | Legal | [cellar], [eur-lex-sum-da], [fm-udgivelser], [retsinformationdk], [skat], [retspraksis], [domsdatabasen] | 2.32B |
289
  | Books | [ncc_books], [memo], [adl], [wikibooks], [jvj], [gutenberg], [relig] | 722.00M |
 
336
  [domsdatabasen]: data/domsdatabasen/domsdatabasen.md
337
  <!-- END-DOMAIN TABLE -->
338
 
339
+ </div>
340
+
341
+ <div style="flex: 1;">
342
 
343
  <p align="center">
344
  <img src="./images/domain_distribution.png" width="400" style="margin-right: 10px;" />
345
  </p>
346
 
347
+ </div>
348
+
349
+ </div>
350
+
351
+
352
+ ### Licensing
353
+
354
+ The following gives an overview of the licensing in the Dynaword. For the exact license of each individual dataset, see the [overview table](#source-data).
355
+ These licenses apply to the constituent data, i.e., the text. The collection of datasets (metadata, quality control, etc.) is licensed under [CC-0](https://creativecommons.org/publicdomain/zero/1.0/legalcode.en).
356
+
357
+ <!-- START-LICENSE TABLE -->
358
+ | License | Sources | N. Tokens |
359
+ |:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------|
360
+ | CC-0 | [danske-taler], [ncc_books], [ncc_newspaper], [miljoeportalen], [opensubtitles], [ep], [ft], [wikisource], [spont], [adl], [hest], [skat], [retspraksis], [wikibooks], [botxt], [naat], [synne], [wiki], [nordjyllandnews], [relig], [nota], [health_hovedstaden] | 1.99B |
361
+ | CC-BY-SA 4.0 | [cellar], [eur-lex-sum-da], [fm-udgivelser], [memo], [tv2r], [jvj], [depbank] | 1.37B |
362
+ | Other (No attribution required) | [retsinformationdk], [domsdatabasen] | 904.61M |
363
+ | Other (Attribution required) | [ai-aktindsigt], [ncc_maalfrid], [ncc_parliament], [dannet], [gutenberg] | 515.61M |
364
+ | **Total** | | 4.78B |
365
+
366
+ [ai-aktindsigt]: data/ai-aktindsigt/ai-aktindsigt.md
367
+ [cellar]: data/cellar/cellar.md
368
+ [danske-taler]: data/danske-taler/danske-taler.md
369
+ [ncc_books]: data/ncc_books/ncc_books.md
370
+ [ncc_newspaper]: data/ncc_newspaper/ncc_newspaper.md
371
+ [ncc_maalfrid]: data/ncc_maalfrid/ncc_maalfrid.md
372
+ [ncc_parliament]: data/ncc_parliament/ncc_parliament.md
373
+ [eur-lex-sum-da]: data/eur-lex-sum-da/eur-lex-sum-da.md
374
+ [miljoeportalen]: data/miljoeportalen/miljoeportalen.md
375
+ [fm-udgivelser]: data/fm-udgivelser/fm-udgivelser.md
376
+ [memo]: data/memo/memo.md
377
+ [opensubtitles]: data/opensubtitles/opensubtitles.md
378
+ [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
379
+ [ep]: data/ep/ep.md
380
+ [ft]: data/ft/ft.md
381
+ [wikisource]: data/wikisource/wikisource.md
382
+ [spont]: data/spont/spont.md
383
+ [tv2r]: data/tv2r/tv2r.md
384
+ [adl]: data/adl/adl.md
385
+ [hest]: data/hest/hest.md
386
+ [skat]: data/skat/skat.md
387
+ [dannet]: data/dannet/dannet.md
388
+ [retspraksis]: data/retspraksis/retspraksis.md
389
+ [wikibooks]: data/wikibooks/wikibooks.md
390
+ [jvj]: data/jvj/jvj.md
391
+ [gutenberg]: data/gutenberg/gutenberg.md
392
+ [botxt]: data/botxt/botxt.md
393
+ [depbank]: data/depbank/depbank.md
394
+ [naat]: data/naat/naat.md
395
+ [synne]: data/synne/synne.md
396
+ [wiki]: data/wiki/wiki.md
397
+ [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
398
+ [relig]: data/relig/relig.md
399
+ [nota]: data/nota/nota.md
400
+ [health_hovedstaden]: data/health_hovedstaden/health_hovedstaden.md
401
+ [domsdatabasen]: data/domsdatabasen/domsdatabasen.md
402
+ <!-- END-LICENSE TABLE -->
403
+
404
+
405
+
406
  ## Dataset Structure
407
 
408
  The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data).
 
453
 
454
  ### Source Data
455
 
456
 
457
+ Below follows a brief overview of the sources in the corpus along with their individual licenses. For more information about an individual dataset, click its hyperlink in the table.
458
 
459
  <details>
460
  <summary><b>Overview Table (click to unfold)</b></summary>
 
569
 
570
 
571
  ### Dataset Statistics
572
+ The following shows per-dataset histograms of document lengths.
 
 
 
 
573
 
574
  <details>
575
  <summary>Per dataset histograms</summary>
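A minimal sketch of how the per-dataset document-length statistics described above could be reproduced with the `datasets` library. The hub id `danish-foundation-models/danish-dynaword`, the per-source config name `hest`, and the `text` column are assumptions inferred from the repository's tests and datasheets, not confirmed details.

```python
# Sketch: document-length histogram for a single source (assumed config "hest").
from datasets import load_dataset
import matplotlib.pyplot as plt

# The repository's tests load each source as its own config:
# load_dataset(repo, dataset_name, split="train")
ds = load_dataset("danish-foundation-models/danish-dynaword", "hest", split="train")

# Document length in characters, matching the average_document_length
# field in descriptive_stats.json.
lengths = [len(text) for text in ds["text"]]

plt.hist(lengths, bins=100, log=True)
plt.xlabel("Document length (characters)")
plt.ylabel("Number of documents")
plt.title("hest: document-length distribution")
plt.savefig("hest_document_lengths.png")
```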
descriptive_stats.json CHANGED
@@ -2,5 +2,5 @@
2
  "number_of_samples": 960357,
3
  "average_document_length": 15301.724414983179,
4
  "number_of_tokens": 4784823570,
5
- "revision": "7754e7a4c0a463ab5aec340a6e836c789f01bced"
6
  }
 
2
  "number_of_samples": 960357,
3
  "average_document_length": 15301.724414983179,
4
  "number_of_tokens": 4784823570,
5
+ "revision": "3d87e24d35c186fbb994478238e7ccba03a4d8a2"
6
  }
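The fields in descriptive_stats.json (sample count, average document length in characters, and token count) could be recomputed roughly as sketched below. The default config, the `text` column, and the exact Llama 3 tokenizer checkpoint are assumptions; the commit history ("Added number of llama3 tokens to desc stats") and the plot axis "Number of Tokens (Llama 3)" only indicate that a Llama 3 tokenizer is used.

```python
# Sketch: recomputing the descriptive statistics for the full corpus.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")  # assumed checkpoint

number_of_samples = len(ds)
average_document_length = sum(len(t) for t in ds["text"]) / number_of_samples
# Tokenizing ~1M documents in one call is slow; batching or streaming would be needed in practice.
number_of_tokens = sum(
    len(ids)
    for ids in tokenizer(ds["text"], add_special_tokens=False)["input_ids"]
)

print(
    {
        "number_of_samples": number_of_samples,
        "average_document_length": average_document_length,
        "number_of_tokens": number_of_tokens,
    }
)
```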
images/domain_distribution.png CHANGED

Git LFS Details (before)

  • SHA256: 890c8e11f2aa66a2e945ba675a9fcd17bf79366aebeab2e3dd9040f915a8760d
  • Pointer size: 131 Bytes
  • Size of remote file: 344 kB

Git LFS Details (after)

  • SHA256: 6596b9ce03800525cc99af5accfd0e84e8c4e14259e20e74039a237b7ab4eec9
  • Pointer size: 131 Bytes
  • Size of remote file: 338 kB
images/tokens_over_time.html CHANGED
@@ -2,6 +2,6 @@
2
  <head><meta charset="utf-8" /></head>
3
  <body>
4
  <div> <script type="text/javascript">window.PlotlyConfig = {MathJaxConfig: 'local'};</script>
5
- <script charset="utf-8" src="https://cdn.plot.ly/plotly-3.0.1.min.js"></script> <div id="7f623fa0-e1e2-4a75-a331-279a360958ad" class="plotly-graph-div" style="height:400px; width:600px;"></div> <script type="text/javascript"> window.PLOTLYENV=window.PLOTLYENV || {}; if (document.getElementById("7f623fa0-e1e2-4a75-a331-279a360958ad")) { Plotly.newPlot( "7f623fa0-e1e2-4a75-a331-279a360958ad", [{"hovertemplate":"%{text}\u003cextra\u003e\u003c\u002fextra\u003e","line":{"color":"#DC2626","width":3},"marker":{"color":"#DC2626","size":5},"mode":"lines+markers","name":"Tokens","text":["Date: 2025-01-02\u003cbr\u003eTokens: 1.57G\u003cbr\u003eSamples: 546,769\u003cbr\u003eCommit: 9c15515d\u003cbr\u003eMessage: Added number of llama3 tokens to desc stats","Date: 2025-01-03\u003cbr\u003eTokens: 1.84G\u003cbr\u003eChange: +271.89M\u003cbr\u003eSamples: 576,589\u003cbr\u003eCommit: 38b692a5\u003cbr\u003eMessage: Added automatically updated samples to update_descriptive_stats.py","Date: 2025-01-04\u003cbr\u003eTokens: 1.84G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 576,589\u003cbr\u003eCommit: 546c3b35\u003cbr\u003eMessage: update opensubtitles","Date: 2025-01-05\u003cbr\u003eTokens: 1.84G\u003cbr\u003eChange: +5.40M\u003cbr\u003eSamples: 588,476\u003cbr\u003eCommit: 0cef3177\u003cbr\u003eMessage: Added distribution plot for number of tokens","Date: 2025-02-10\u003cbr\u003eTokens: 1.85G\u003cbr\u003eChange: +7.30M\u003cbr\u003eSamples: 588,922\u003cbr\u003eCommit: 97b3aa5d\u003cbr\u003eMessage: Add Nota-tekster (#41)","Date: 2025-03-10\u003cbr\u003eTokens: 1.85G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 588,922\u003cbr\u003eCommit: 5affec72\u003cbr\u003eMessage: add_memo (#42)","Date: 2025-04-29\u003cbr\u003eTokens: 3.36G\u003cbr\u003eChange: +1.51G\u003cbr\u003eSamples: 846,387\u003cbr\u003eCommit: 65faa6e2\u003cbr\u003eMessage: a lot of improvements","Date: 2025-04-29\u003cbr\u003eTokens: 3.36G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 846,387\u003cbr\u003eCommit: 43d839aa\u003cbr\u003eMessage: updates sheets","Date: 2025-04-29\u003cbr\u003eTokens: 3.36G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 846,387\u003cbr\u003eCommit: 060c4430\u003cbr\u003eMessage: Updated changelog","Date: 2025-04-29\u003cbr\u003eTokens: 3.36G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 846,387\u003cbr\u003eCommit: c9397c44\u003cbr\u003eMessage: reformatted the readme","Date: 2025-05-12\u003cbr\u003eTokens: 4.26G\u003cbr\u003eChange: +901.15M\u003cbr\u003eSamples: 891,075\u003cbr\u003eCommit: 2453a15a\u003cbr\u003eMessage: updated datasheet","Date: 2025-05-12\u003cbr\u003eTokens: 4.26G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 891,075\u003cbr\u003eCommit: 91cd694a\u003cbr\u003eMessage: docs: minor fixes to datasheets","Date: 2025-05-12\u003cbr\u003eTokens: 4.26G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 891,075\u003cbr\u003eCommit: d36009a4\u003cbr\u003eMessage: update desc stats","Date: 2025-06-23\u003cbr\u003eTokens: 4.37G\u003cbr\u003eChange: +104.46M\u003cbr\u003eSamples: 891,094\u003cbr\u003eCommit: 16931a4c\u003cbr\u003eMessage: Fix memo (#68)","Date: 2025-06-25\u003cbr\u003eTokens: 4.37G\u003cbr\u003eChange: +581.06k\u003cbr\u003eSamples: 891,348\u003cbr\u003eCommit: 2c91001b\u003cbr\u003eMessage: Fix Danske Taler (#69)","Date: 2025-06-30\u003cbr\u003eTokens: 4.40G\u003cbr\u003eChange: +26.49M\u003cbr\u003eSamples: 915,090\u003cbr\u003eCommit: 7df022e7\u003cbr\u003eMessage: Adding Scrape Hovedstaden (#70)","Date: 2025-07-01\u003cbr\u003eTokens: 4.70G\u003cbr\u003eChange: 
+302.40M\u003cbr\u003eSamples: 951,889\u003cbr\u003eCommit: 6a2c8fbf\u003cbr\u003eMessage: update-retsinformationdk (#72)","Date: 2025-07-08\u003cbr\u003eTokens: 4.70G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 951,889\u003cbr\u003eCommit: 0cdc88c0\u003cbr\u003eMessage: Add tokens over time (+ rename scrape_hovedstaten) (#73)","Date: 2025-07-11\u003cbr\u003eTokens: 4.78G\u003cbr\u003eChange: +86.35M\u003cbr\u003eSamples: 960,357\u003cbr\u003eCommit: dd36adfe\u003cbr\u003eMessage: Add domsdatabasen (#74)"],"x":["2025-01-02T00:00:00.000000000","2025-01-03T00:00:00.000000000","2025-01-04T00:00:00.000000000","2025-01-05T00:00:00.000000000","2025-02-10T00:00:00.000000000","2025-03-10T00:00:00.000000000","2025-04-29T00:00:00.000000000","2025-04-29T00:00:00.000000000","2025-04-29T00:00:00.000000000","2025-04-29T00:00:00.000000000","2025-05-12T00:00:00.000000000","2025-05-12T00:00:00.000000000","2025-05-12T00:00:00.000000000","2025-06-23T00:00:00.000000000","2025-06-25T00:00:00.000000000","2025-06-30T00:00:00.000000000","2025-07-01T00:00:00.000000000","2025-07-08T00:00:00.000000000","2025-07-11T00:00:00.000000000"],"y":[1567706760,1839599769,1839599769,1844994816,1852293828,1852293828,3363395483,3363395483,3363395483,3363395483,4264549097,4264549097,4264549097,4369008328,4369589385,4396075044,4698470546,4698470546,4784823570],"type":"scatter"}], {"template":{"data":{"histogram2dcontour":[{"type":"histogram2dcontour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"choropleth":[{"type":"choropleth","colorbar":{"outlinewidth":0,"ticks":""}}],"histogram2d":[{"type":"histogram2d","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"heatmap":[{"type":"heatmap","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"contourcarpet":[{"type":"contourcarpet","colorbar":{"outlinewidth":0,"ticks":""}}],"contour":[{"type":"contour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"surface":[{"type":"surface","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"mesh3d":[{"type":"mesh3d","colorbar":{"outlinewidth":0,"ticks":""}}],"scatter":[{"fillpattern":{"fillmode":"overlay","size":10,"solidity":0.2},
"type":"scatter"}],"parcoords":[{"type":"parcoords","line":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterpolargl":[{"type":"scatterpolargl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"bar":[{"error_x":{"color":"#2a3f5f"},"error_y":{"color":"#2a3f5f"},"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"bar"}],"scattergeo":[{"type":"scattergeo","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterpolar":[{"type":"scatterpolar","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"histogram":[{"marker":{"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"histogram"}],"scattergl":[{"type":"scattergl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatter3d":[{"type":"scatter3d","line":{"colorbar":{"outlinewidth":0,"ticks":""}},"marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattermap":[{"type":"scattermap","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattermapbox":[{"type":"scattermapbox","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterternary":[{"type":"scatterternary","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattercarpet":[{"type":"scattercarpet","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"carpet":[{"aaxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"baxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"type":"carpet"}],"table":[{"cells":{"fill":{"color":"#EBF0F8"},"line":{"color":"white"}},"header":{"fill":{"color":"#C8D4E3"},"line":{"color":"white"}},"type":"table"}],"barpolar":[{"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"barpolar"}],"pie":[{"automargin":true,"type":"pie"}]},"layout":{"autotypenumbers":"strict","colorway":["#636efa","#EF553B","#00cc96","#ab63fa","#FFA15A","#19d3f3","#FF6692","#B6E880","#FF97FF","#FECB52"],"font":{"color":"#2a3f5f"},"hovermode":"closest","hoverlabel":{"align":"left"},"paper_bgcolor":"white","plot_bgcolor":"#E5ECF6","polar":{"bgcolor":"#E5ECF6","angularaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"radialaxis":{"gridcolor":"white","linecolor":"white","ticks":""}},"ternary":{"bgcolor":"#E5ECF6","aaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"baxis":{"gridcolor":"white","linecolor":"white","ticks":""},"caxis":{"gridcolor":"white","linecolor":"white","ticks":""}},"coloraxis":{"colorbar":{"outlinewidth":0,"ticks":""}},"colorscale":{"sequential":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"sequentialminus":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"diverging":[[0,"#8e0152"],[0.1,"#c51b7d"],[0.2,"#de77ae"],[0.3,"#f1b6da"],[0.4,"#fde0ef"],[0.5,"#f7f7f7"],[0.6,"#e6f5d0"],[0.7,"#b8e186"],[0.8,"#7fbc41"],[0.9,"#4d9221"],[1,"#276419"]]},"xaxis":{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"yaxis"
:{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"scene":{"xaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"yaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"zaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2}},"shapedefaults":{"line":{"color":"#2a3f5f"}},"annotationdefaults":{"arrowcolor":"#2a3f5f","arrowhead":0,"arrowwidth":1},"geo":{"bgcolor":"white","landcolor":"#E5ECF6","subunitcolor":"white","showland":true,"showlakes":true,"lakecolor":"white"},"title":{"x":0.05},"mapbox":{"style":"light"}}},"shapes":[{"line":{"color":"gray","dash":"dash","width":1},"type":"line","x0":0,"x1":1,"xref":"x domain","y0":300000000,"y1":300000000,"yref":"y"},{"line":{"color":"gray","dash":"dash","width":1},"type":"line","x0":0,"x1":1,"xref":"x domain","y0":1000000000,"y1":1000000000,"yref":"y"}],"annotations":[{"font":{"color":"gray","size":12},"showarrow":false,"text":"Common Corpus (dan) (Langlais et al., 2025)","x":0,"xanchor":"left","xref":"x domain","y":300000000,"yanchor":"bottom","yref":"y"},{"font":{"color":"gray","size":12},"showarrow":false,"text":"Danish Gigaword (Derczynski et al., 2021)","x":0,"xanchor":"left","xref":"x domain","y":1000000000,"yanchor":"bottom","yref":"y"}],"title":{"text":"Number of Tokens Over Time in Danish Dynaword"},"xaxis":{"title":{"text":"Date"}},"yaxis":{"title":{"text":"Number of Tokens (Llama 3)"},"tickformat":".2s","ticksuffix":""},"hovermode":"closest","width":600,"height":400,"showlegend":false,"plot_bgcolor":"rgba(0,0,0,0)","paper_bgcolor":"rgba(0,0,0,0)"}, {"responsive": true} ) }; </script> </div>
6
  </body>
7
  </html>
 
2
  <head><meta charset="utf-8" /></head>
3
  <body>
4
  <div> <script type="text/javascript">window.PlotlyConfig = {MathJaxConfig: 'local'};</script>
5
+ <script charset="utf-8" src="https://cdn.plot.ly/plotly-3.0.1.min.js"></script> <div id="5b08e3f1-a9bd-44ac-afab-f0bde51525e6" class="plotly-graph-div" style="height:400px; width:600px;"></div> <script type="text/javascript"> window.PLOTLYENV=window.PLOTLYENV || {}; if (document.getElementById("5b08e3f1-a9bd-44ac-afab-f0bde51525e6")) { Plotly.newPlot( "5b08e3f1-a9bd-44ac-afab-f0bde51525e6", [{"hovertemplate":"%{text}\u003cextra\u003e\u003c\u002fextra\u003e","line":{"color":"#DC2626","width":3},"marker":{"color":"#DC2626","size":5},"mode":"lines+markers","name":"Tokens","text":["Date: 2025-01-02\u003cbr\u003eTokens: 1.57G\u003cbr\u003eSamples: 546,769\u003cbr\u003eCommit: 9c15515d\u003cbr\u003eMessage: Added number of llama3 tokens to desc stats","Date: 2025-01-03\u003cbr\u003eTokens: 1.84G\u003cbr\u003eChange: +271.89M\u003cbr\u003eSamples: 576,589\u003cbr\u003eCommit: 38b692a5\u003cbr\u003eMessage: Added automatically updated samples to update_descriptive_stats.py","Date: 2025-01-04\u003cbr\u003eTokens: 1.84G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 576,589\u003cbr\u003eCommit: 546c3b35\u003cbr\u003eMessage: update opensubtitles","Date: 2025-01-05\u003cbr\u003eTokens: 1.84G\u003cbr\u003eChange: +5.40M\u003cbr\u003eSamples: 588,476\u003cbr\u003eCommit: 0cef3177\u003cbr\u003eMessage: Added distribution plot for number of tokens","Date: 2025-02-10\u003cbr\u003eTokens: 1.85G\u003cbr\u003eChange: +7.30M\u003cbr\u003eSamples: 588,922\u003cbr\u003eCommit: 97b3aa5d\u003cbr\u003eMessage: Add Nota-tekster (#41)","Date: 2025-03-10\u003cbr\u003eTokens: 1.85G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 588,922\u003cbr\u003eCommit: 5affec72\u003cbr\u003eMessage: add_memo (#42)","Date: 2025-04-29\u003cbr\u003eTokens: 3.36G\u003cbr\u003eChange: +1.51G\u003cbr\u003eSamples: 846,387\u003cbr\u003eCommit: 65faa6e2\u003cbr\u003eMessage: a lot of improvements","Date: 2025-04-29\u003cbr\u003eTokens: 3.36G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 846,387\u003cbr\u003eCommit: 43d839aa\u003cbr\u003eMessage: updates sheets","Date: 2025-04-29\u003cbr\u003eTokens: 3.36G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 846,387\u003cbr\u003eCommit: 060c4430\u003cbr\u003eMessage: Updated changelog","Date: 2025-04-29\u003cbr\u003eTokens: 3.36G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 846,387\u003cbr\u003eCommit: c9397c44\u003cbr\u003eMessage: reformatted the readme","Date: 2025-05-12\u003cbr\u003eTokens: 4.26G\u003cbr\u003eChange: +901.15M\u003cbr\u003eSamples: 891,075\u003cbr\u003eCommit: d36009a4\u003cbr\u003eMessage: update desc stats","Date: 2025-05-12\u003cbr\u003eTokens: 4.26G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 891,075\u003cbr\u003eCommit: 91cd694a\u003cbr\u003eMessage: docs: minor fixes to datasheets","Date: 2025-05-12\u003cbr\u003eTokens: 4.26G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 891,075\u003cbr\u003eCommit: 2453a15a\u003cbr\u003eMessage: updated datasheet","Date: 2025-06-23\u003cbr\u003eTokens: 4.37G\u003cbr\u003eChange: +104.46M\u003cbr\u003eSamples: 891,094\u003cbr\u003eCommit: 16931a4c\u003cbr\u003eMessage: Fix memo (#68)","Date: 2025-06-25\u003cbr\u003eTokens: 4.37G\u003cbr\u003eChange: +581.06k\u003cbr\u003eSamples: 891,348\u003cbr\u003eCommit: 2c91001b\u003cbr\u003eMessage: Fix Danske Taler (#69)","Date: 2025-06-30\u003cbr\u003eTokens: 4.40G\u003cbr\u003eChange: +26.49M\u003cbr\u003eSamples: 915,090\u003cbr\u003eCommit: 7df022e7\u003cbr\u003eMessage: Adding Scrape Hovedstaden (#70)","Date: 2025-07-01\u003cbr\u003eTokens: 4.70G\u003cbr\u003eChange: 
+302.40M\u003cbr\u003eSamples: 951,889\u003cbr\u003eCommit: 6a2c8fbf\u003cbr\u003eMessage: update-retsinformationdk (#72)","Date: 2025-07-08\u003cbr\u003eTokens: 4.70G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 951,889\u003cbr\u003eCommit: 0cdc88c0\u003cbr\u003eMessage: Add tokens over time (+ rename scrape_hovedstaten) (#73)","Date: 2025-07-11\u003cbr\u003eTokens: 4.78G\u003cbr\u003eChange: +86.35M\u003cbr\u003eSamples: 960,357\u003cbr\u003eCommit: dd36adfe\u003cbr\u003eMessage: Add domsdatabasen (#74)","Date: 2025-07-21\u003cbr\u003eTokens: 4.78G\u003cbr\u003eChange: +0\u003cbr\u003eSamples: 960,357\u003cbr\u003eCommit: d06be7ce\u003cbr\u003eMessage: Updating readme and graphs after merging with main."],"x":["2025-01-02T00:00:00.000000000","2025-01-03T00:00:00.000000000","2025-01-04T00:00:00.000000000","2025-01-05T00:00:00.000000000","2025-02-10T00:00:00.000000000","2025-03-10T00:00:00.000000000","2025-04-29T00:00:00.000000000","2025-04-29T00:00:00.000000000","2025-04-29T00:00:00.000000000","2025-04-29T00:00:00.000000000","2025-05-12T00:00:00.000000000","2025-05-12T00:00:00.000000000","2025-05-12T00:00:00.000000000","2025-06-23T00:00:00.000000000","2025-06-25T00:00:00.000000000","2025-06-30T00:00:00.000000000","2025-07-01T00:00:00.000000000","2025-07-08T00:00:00.000000000","2025-07-11T00:00:00.000000000","2025-07-21T00:00:00.000000000"],"y":[1567706760,1839599769,1839599769,1844994816,1852293828,1852293828,3363395483,3363395483,3363395483,3363395483,4264549097,4264549097,4264549097,4369008328,4369589385,4396075044,4698470546,4698470546,4784823570,4784823570],"type":"scatter"}], {"template":{"data":{"histogram2dcontour":[{"type":"histogram2dcontour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"choropleth":[{"type":"choropleth","colorbar":{"outlinewidth":0,"ticks":""}}],"histogram2d":[{"type":"histogram2d","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"heatmap":[{"type":"heatmap","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"contourcarpet":[{"type":"contourcarpet","colorbar":{"outlinewidth":0,"ticks":""}}],"contour":[{"type":"contour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"surface":[{"type":"surface","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.666666
6666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"mesh3d":[{"type":"mesh3d","colorbar":{"outlinewidth":0,"ticks":""}}],"scatter":[{"fillpattern":{"fillmode":"overlay","size":10,"solidity":0.2},"type":"scatter"}],"parcoords":[{"type":"parcoords","line":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterpolargl":[{"type":"scatterpolargl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"bar":[{"error_x":{"color":"#2a3f5f"},"error_y":{"color":"#2a3f5f"},"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"bar"}],"scattergeo":[{"type":"scattergeo","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterpolar":[{"type":"scatterpolar","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"histogram":[{"marker":{"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"histogram"}],"scattergl":[{"type":"scattergl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatter3d":[{"type":"scatter3d","line":{"colorbar":{"outlinewidth":0,"ticks":""}},"marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattermap":[{"type":"scattermap","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattermapbox":[{"type":"scattermapbox","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterternary":[{"type":"scatterternary","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattercarpet":[{"type":"scattercarpet","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"carpet":[{"aaxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"baxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"type":"carpet"}],"table":[{"cells":{"fill":{"color":"#EBF0F8"},"line":{"color":"white"}},"header":{"fill":{"color":"#C8D4E3"},"line":{"color":"white"}},"type":"table"}],"barpolar":[{"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"barpolar"}],"pie":[{"automargin":true,"type":"pie"}]},"layout":{"autotypenumbers":"strict","colorway":["#636efa","#EF553B","#00cc96","#ab63fa","#FFA15A","#19d3f3","#FF6692","#B6E880","#FF97FF","#FECB52"],"font":{"color":"#2a3f5f"},"hovermode":"closest","hoverlabel":{"align":"left"},"paper_bgcolor":"white","plot_bgcolor":"#E5ECF6","polar":{"bgcolor":"#E5ECF6","angularaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"radialaxis":{"gridcolor":"white","linecolor":"white","ticks":""}},"ternary":{"bgcolor":"#E5ECF6","aaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"baxis":{"gridcolor":"white","linecolor":"white","ticks":""},"caxis":{"gridcolor":"white","linecolor":"white","ticks":""}},"coloraxis":{"colorbar":{"outlinewidth":0,"ticks":""}},"colorscale":{"sequential":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"sequentialminus":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"diverging":[[0,"#8e0152"],[0.1,"#c51b7d"],[0.2,"#de77ae"],[0.3,"#f1b6da"],[0.4,"#fde0ef"],[
0.5,"#f7f7f7"],[0.6,"#e6f5d0"],[0.7,"#b8e186"],[0.8,"#7fbc41"],[0.9,"#4d9221"],[1,"#276419"]]},"xaxis":{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"yaxis":{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"scene":{"xaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"yaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"zaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2}},"shapedefaults":{"line":{"color":"#2a3f5f"}},"annotationdefaults":{"arrowcolor":"#2a3f5f","arrowhead":0,"arrowwidth":1},"geo":{"bgcolor":"white","landcolor":"#E5ECF6","subunitcolor":"white","showland":true,"showlakes":true,"lakecolor":"white"},"title":{"x":0.05},"mapbox":{"style":"light"}}},"shapes":[{"line":{"color":"gray","dash":"dash","width":1},"type":"line","x0":0,"x1":1,"xref":"x domain","y0":300000000,"y1":300000000,"yref":"y"},{"line":{"color":"gray","dash":"dash","width":1},"type":"line","x0":0,"x1":1,"xref":"x domain","y0":1000000000,"y1":1000000000,"yref":"y"}],"annotations":[{"font":{"color":"gray","size":12},"showarrow":false,"text":"Common Corpus (dan) (Langlais et al., 2025)","x":0,"xanchor":"left","xref":"x domain","y":300000000,"yanchor":"bottom","yref":"y"},{"font":{"color":"gray","size":12},"showarrow":false,"text":"Danish Gigaword (Derczynski et al., 2021)","x":0,"xanchor":"left","xref":"x domain","y":1000000000,"yanchor":"bottom","yref":"y"}],"title":{"text":"Number of Tokens Over Time in Danish Dynaword"},"xaxis":{"title":{"text":"Date"}},"yaxis":{"title":{"text":"Number of Tokens (Llama 3)"},"tickformat":".2s","ticksuffix":""},"hovermode":"closest","width":600,"height":400,"showlegend":false,"plot_bgcolor":"rgba(0,0,0,0)","paper_bgcolor":"rgba(0,0,0,0)"}, {"responsive": true} ) }; </script> </div>
6
  </body>
7
  </html>
images/tokens_over_time.svg CHANGED
pyproject.toml CHANGED
@@ -1,6 +1,6 @@
1
  [project]
2
  name = "dynaword"
3
- version = "1.2.5"
4
  description = "project code for the danish dynaword project"
5
  readme = "README.md"
6
  requires-python = ">=3.12,<3.13" # 3.13 have issues with spacy and pytorch
 
1
  [project]
2
  name = "dynaword"
3
+ version = "1.2.6"
4
  description = "project code for the danish dynaword project"
5
  readme = "README.md"
6
  requires-python = ">=3.12,<3.13" # 3.13 have issues with spacy and pytorch
src/dynaword/datasheet.py CHANGED
@@ -117,20 +117,6 @@ class DataSheet(BaseModel):
117
  def to_str(self) -> str:
118
  return f"---\n{self.frontmatter_as_str.strip()}\n---\n\n{self.body.strip()}\n"
119
 
120
- def get_feature_by_string(
121
- self, feature_name: Literal["Domain", "Language", "License"]
122
- ) -> str:
123
- """Get a specific feature from the frontmatter."""
124
- match feature_name:
125
- case "Domain":
126
- return self.domains[0] if self.domains else "N/A"
127
- case "Language":
128
- return ", ".join(self.language)
129
- case "License":
130
- return self.license
131
- case _:
132
- raise ValueError(f"Unknown feature: {feature_name}")
133
-
134
  def get_dataset(self, **kwargs) -> Dataset:
135
  ds_path = self.path.parent
136
  ds = load_dataset(ds_path.as_posix(), split="train", **kwargs)
 
117
  def to_str(self) -> str:
118
  return f"---\n{self.frontmatter_as_str.strip()}\n---\n\n{self.body.strip()}\n"
119
 
120
  def get_dataset(self, **kwargs) -> Dataset:
121
  ds_path = self.path.parent
122
  ds = load_dataset(ds_path.as_posix(), split="train", **kwargs)
src/dynaword/tables.py CHANGED
@@ -109,6 +109,42 @@ def create_overview_table(
109
  return df
110
 
111
 
112
  def create_grouped_table(
113
  group: Literal["Domain", "Language", "License"] = "Domain",
114
  repo_path: Path = repo_path,
@@ -127,7 +163,7 @@ def create_grouped_table(
127
 
128
  sheet = DataSheet.load_from_path(readme_path)
129
  desc_stats = sheet.get_descritive_stats()
130
- feature = sheet.get_feature_by_string(group)
131
 
132
  table["Sources"] += [f"[{dataset_path.name}]"]
133
  table[group] += [feature]
@@ -163,7 +199,7 @@ def create_grouped_table_str(
163
  ) -> str:
164
  table = create_grouped_table(group=group, repo_path=repo_path)
165
  readme_references = create_dataset_readme_references()
166
- package = f"{table.to_markdown(index=False, maxcolwidths=[None, 20, None])}\n\n{readme_references}\n\n"
167
  return package
168
 
169
 
 
109
  return df
110
 
111
 
112
+ def _get_normalized_license(ds: DataSheet) -> str:
113
+ non_standard_license_names = {
114
+ "Apache 2.0": "Other (Attribution required)",
115
+ "NLOD 2.0": "Other (Attribution required)",
116
+ "DanNet 1.0": "Other (Attribution required)",
117
+ "Gutenberg": "Other (Attribution required)",
118
+ "Danish Copyright Law": "Other (No attribution required)",
119
+ }
120
+ if (
121
+ ds.license_name not in non_standard_license_names
122
+ and ds.license_name is not None
123
+ ):
124
+ return ds.license_name
125
+ if ds.license_name is None:
126
+ raise ValueError(
127
+ f"Datasheet {ds.pretty_name} has no license name specified in the frontmatter."
128
+ )
129
+ return non_standard_license_names[ds.license_name]
130
+
131
+
132
+ def _get_feature_by_string(
133
+ datasheet: DataSheet, feature_name: Literal["Domain", "Language", "License"]
134
+ ) -> str:
135
+ """Get a specific feature from the frontmatter."""
136
+
137
+ match feature_name:
138
+ case "Domain":
139
+ return datasheet.domains[0] if datasheet.domains else "N/A"
140
+ case "Language":
141
+ return ", ".join(datasheet.language)
142
+ case "License":
143
+ return _get_normalized_license(datasheet)
144
+ case _:
145
+ raise ValueError(f"Unknown feature: {feature_name}")
146
+
147
+
148
  def create_grouped_table(
149
  group: Literal["Domain", "Language", "License"] = "Domain",
150
  repo_path: Path = repo_path,
 
163
 
164
  sheet = DataSheet.load_from_path(readme_path)
165
  desc_stats = sheet.get_descritive_stats()
166
+ feature = _get_feature_by_string(sheet, group)
167
 
168
  table["Sources"] += [f"[{dataset_path.name}]"]
169
  table[group] += [feature]
 
199
  ) -> str:
200
  table = create_grouped_table(group=group, repo_path=repo_path)
201
  readme_references = create_dataset_readme_references()
202
+ package = f"{table.to_markdown(index=False, maxcolwidths=[None, None, None])}\n\n{readme_references}\n\n"
203
  return package
204
 
205
 
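The helpers added to src/dynaword/tables.py above generate the Domain and License tables embedded in the README. A minimal usage sketch, assuming the repository is installed so that `src/dynaword` is importable as the `dynaword` package:

```python
# Sketch: regenerating the grouped markdown tables with the new helpers.
from dynaword.tables import create_grouped_table_str

# License grouping uses _get_normalized_license to fold non-standard licenses
# into "Other (Attribution required)" / "Other (No attribution required)".
print(create_grouped_table_str(group="License"))

# The same helper produces the Domain table shown in the Domains section.
print(create_grouped_table_str(group="Domain"))
```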
test_results.log CHANGED
@@ -11,1408 +11,15 @@ src/tests/test_datasheets.py ........................................... [ 35%]
11
  ........................................................................ [ 57%]
12
  ................................................................. [ 76%]
13
  src/tests/test_load.py .. [ 77%]
14
- src/tests/test_quality/test_duplicates.py .............FF..F.F.......... [ 86%]
15
  ......s [ 88%]
16
- src/tests/test_quality/test_short_texts.py .............FF....F......... [ 97%]
17
  ....... [ 99%]
18
- src/tests/test_unique_ids.py F [100%]
19
 
20
- =================================== FAILURES ===================================
21
- ______________________ test_no_within_data_duplicates[ep] ______________________
22
-
23
- self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x118b3e240>
24
- gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ep/ep.parquet))}
25
- fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ep/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
26
- file_format = 'arrow', max_shard_size = 500000000, job_id = 0
27
-
28
- def _prepare_split_single(
29
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
30
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
31
- gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
32
- generator = self._generate_tables(**gen_kwargs)
33
- writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
34
- embed_local_files = file_format == "parquet"
35
- shard_lengths = []
36
- total_num_examples, total_num_bytes = 0, 0
37
-
38
- shard_id = 0
39
- num_examples_progress_update = 0
40
- try:
41
- writer = writer_class(
42
- features=self.info.features,
43
- path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
44
- writer_batch_size=self._writer_batch_size,
45
- storage_options=self._fs.storage_options,
46
- embed_local_files=embed_local_files,
47
- )
48
- try:
49
- _time = time.time()
50
- for _, table in generator:
51
- if max_shard_size is not None and writer._num_bytes > max_shard_size:
52
- num_examples, num_bytes = writer.finalize()
53
- writer.close()
54
- shard_lengths.append(num_examples)
55
- total_num_examples += num_examples
56
- total_num_bytes += num_bytes
57
- shard_id += 1
58
- writer = writer_class(
59
- features=writer._features,
60
- path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
61
- writer_batch_size=self._writer_batch_size,
62
- storage_options=self._fs.storage_options,
63
- embed_local_files=embed_local_files,
64
- )
65
- try:
66
- > writer.write_table(table)
67
-
68
- .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
69
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
70
- .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
71
- self.pa_writer.write_table(pa_table, writer_batch_size)
72
- pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
73
- ???
74
- pyarrow/error.pxi:89: in pyarrow.lib.check_status
75
- ???
76
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
77
-
78
- self = <fsspec.implementations.local.LocalFileOpener object at 0x114a4bfa0>
79
- args = (<pyarrow.Buffer address=0x5ddec020000 size=75246719 is_cpu=True is_mutable=True>,)
80
- kwargs = {}
81
-
82
- def write(self, *args, **kwargs):
83
- > return self.f.write(*args, **kwargs)
84
- E OSError: [Errno 28] No space left on device
85
-
86
- .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
87
-
88
- The above exception was the direct cause of the following exception:
89
-
90
- dataset_name = 'ep'
91
-
92
- @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
93
- def test_no_within_data_duplicates(dataset_name: str):
94
- > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
95
-
96
- src/tests/test_quality/test_duplicates.py:12:
97
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
98
- .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
99
- builder_instance.download_and_prepare(
100
- .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
101
- self._download_and_prepare(
102
- .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
103
- self._prepare_split(split_generator, **prepare_split_kwargs)
104
- .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
105
- for job_id, done, content in self._prepare_split_single(
106
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
107
-
108
- self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x118b3e240>
109
- gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ep/ep.parquet))}
110
- fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ep/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
111
- file_format = 'arrow', max_shard_size = 500000000, job_id = 0
112
-
113
- def _prepare_split_single(
114
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
115
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
116
- gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
117
- generator = self._generate_tables(**gen_kwargs)
118
- writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
119
- embed_local_files = file_format == "parquet"
120
- shard_lengths = []
121
- total_num_examples, total_num_bytes = 0, 0
122
-
123
- shard_id = 0
124
- num_examples_progress_update = 0
125
- try:
126
- writer = writer_class(
127
- features=self.info.features,
128
- path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
129
- writer_batch_size=self._writer_batch_size,
130
- storage_options=self._fs.storage_options,
131
- embed_local_files=embed_local_files,
132
- )
133
- try:
134
- _time = time.time()
135
- for _, table in generator:
136
- if max_shard_size is not None and writer._num_bytes > max_shard_size:
137
- num_examples, num_bytes = writer.finalize()
138
- writer.close()
139
- shard_lengths.append(num_examples)
140
- total_num_examples += num_examples
141
- total_num_bytes += num_bytes
142
- shard_id += 1
143
- writer = writer_class(
144
- features=writer._features,
145
- path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
146
- writer_batch_size=self._writer_batch_size,
147
- storage_options=self._fs.storage_options,
148
- embed_local_files=embed_local_files,
149
- )
150
- try:
151
- writer.write_table(table)
152
- except CastError as cast_error:
153
- raise DatasetGenerationCastError.from_cast_error(
154
- cast_error=cast_error,
155
- builder_name=self.info.builder_name,
156
- gen_kwargs=gen_kwargs,
157
- token=self.token,
158
- )
159
- num_examples_progress_update += len(table)
160
- if time.time() > _time + config.PBAR_REFRESH_TIME_INTERVAL:
161
- _time = time.time()
162
- yield job_id, False, num_examples_progress_update
163
- num_examples_progress_update = 0
164
- finally:
165
- yield job_id, False, num_examples_progress_update
166
- num_shards = shard_id + 1
167
- num_examples, num_bytes = writer.finalize()
168
- writer.close()
169
- shard_lengths.append(num_examples)
170
- total_num_examples += num_examples
171
- total_num_bytes += num_bytes
172
- except Exception as e:
173
- # Ignore the writer's error for no examples written to the file if this error was caused by the error in _generate_examples before the first example was yielded
174
- if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
175
- e = e.__context__
176
- if isinstance(e, DatasetGenerationError):
177
- raise
178
- > raise DatasetGenerationError("An error occurred while generating the dataset") from e
179
- E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
180
-
181
- .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
182
- ----------------------------- Captured stderr call -----------------------------
183
-
184
- ______________________ test_no_within_data_duplicates[ft] ______________________
- 
- self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x11137ed80>
- gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ft/ft.parquet))}
- fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ft/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
- file_format = 'arrow', max_shard_size = 500000000, job_id = 0
- 
- def _prepare_split_single(
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
- gen_kwargs = {k: tracked_list(v) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
- generator = self._generate_tables(**gen_kwargs)
- writer_class = ParquetWriter if file_format == "parquet" else ArrowWriter
- embed_local_files = file_format == "parquet"
- shard_lengths = []
- total_num_examples, total_num_bytes = 0, 0
- 
- shard_id = 0
- num_examples_progress_update = 0
- try:
- writer = writer_class(
- features=self.info.features,
- path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
- writer_batch_size=self._writer_batch_size,
- storage_options=self._fs.storage_options,
- embed_local_files=embed_local_files,
- )
- try:
- _time = time.time()
- for _, table in generator:
- if max_shard_size is not None and writer._num_bytes > max_shard_size:
- num_examples, num_bytes = writer.finalize()
- writer.close()
- shard_lengths.append(num_examples)
- total_num_examples += num_examples
- total_num_bytes += num_bytes
- shard_id += 1
- writer = writer_class(
- features=writer._features,
- path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
- writer_batch_size=self._writer_batch_size,
- storage_options=self._fs.storage_options,
- embed_local_files=embed_local_files,
- )
- try:
- > writer.write_table(table)
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
- self.pa_writer.write_table(pa_table, writer_batch_size)
- pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
- ???
- pyarrow/error.pxi:89: in pyarrow.lib.check_status
- ???
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- 
- self = <fsspec.implementations.local.LocalFileOpener object at 0x1137dd150>
- args = (<pyarrow.Buffer address=0x5de9c020000 size=274397630 is_cpu=True is_mutable=True>,)
- kwargs = {}
- 
- def write(self, *args, **kwargs):
- > return self.f.write(*args, **kwargs)
- E OSError: [Errno 28] No space left on device
- 
- .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
- 
- The above exception was the direct cause of the following exception:
- 
- dataset_name = 'ft'
- 
- @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
- def test_no_within_data_duplicates(dataset_name: str):
- > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
- 
- src/tests/test_quality/test_duplicates.py:12:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- .venv/lib/python3.12/site-packages/datasets/load.py:2151: in load_dataset
- builder_instance.download_and_prepare(
- .venv/lib/python3.12/site-packages/datasets/builder.py:924: in download_and_prepare
- self._download_and_prepare(
- .venv/lib/python3.12/site-packages/datasets/builder.py:1000: in _download_and_prepare
- self._prepare_split(split_generator, **prepare_split_kwargs)
- .venv/lib/python3.12/site-packages/datasets/builder.py:1741: in _prepare_split
- for job_id, done, content in self._prepare_split_single(
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- > raise DatasetGenerationError("An error occurred while generating the dataset") from e
- E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
- ----------------------------- Captured stderr call -----------------------------
- 
- _____________________ test_no_within_data_duplicates[tv2r] _____________________
- 
- self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x114c07bc0>
- gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/tv2r/tv2r.parquet))}
- fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/tv2r/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
- file_format = 'arrow', max_shard_size = 500000000, job_id = 0
- 
- def _prepare_split_single(
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
- > writer.write_table(table)
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
- self.pa_writer.write_table(pa_table, writer_batch_size)
- pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
- pyarrow/error.pxi:89: in pyarrow.lib.check_status
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- 
- self = <fsspec.implementations.local.LocalFileOpener object at 0x11379d9f0>
- args = (<pyarrow.Buffer address=0x5cf2c0d0000 size=4000 is_cpu=True is_mutable=True>,)
- kwargs = {}
- 
- def write(self, *args, **kwargs):
- > return self.f.write(*args, **kwargs)
- E OSError: [Errno 28] No space left on device
- 
- .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
- 
- During handling of the above exception, another exception occurred:
- 
- def _prepare_split_single(
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
- > num_examples, num_bytes = writer.finalize()
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1886:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:644: in finalize
- self.stream.close()
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- 
- self = <fsspec.implementations.local.LocalFileOpener object at 0x11379d9f0>
- 
- def close(self):
- > return self.f.close()
- E OSError: [Errno 28] No space left on device
- 
- .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:444: OSError
- 
- The above exception was the direct cause of the following exception:
- 
- dataset_name = 'tv2r'
- 
- @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
- def test_no_within_data_duplicates(dataset_name: str):
- > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
- 
- src/tests/test_quality/test_duplicates.py:12:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- > raise DatasetGenerationError("An error occurred while generating the dataset") from e
- E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
- ----------------------------- Captured stderr call -----------------------------
- 
- _____________________ test_no_within_data_duplicates[hest] _____________________
- 
- self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x1137b2360>
- gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/hest/hest.parquet))}
- fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/hest/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
- file_format = 'arrow', max_shard_size = 500000000, job_id = 0
- 
- def _prepare_split_single(
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
- > writer.write_table(table)
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
- self.pa_writer.write_table(pa_table, writer_batch_size)
- pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
- pyarrow/error.pxi:89: in pyarrow.lib.check_status
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- 
- self = <fsspec.implementations.local.LocalFileOpener object at 0x114af1390>
- args = (<pyarrow.Buffer address=0x5e004020000 size=147880457 is_cpu=True is_mutable=True>,)
- kwargs = {}
- 
- def write(self, *args, **kwargs):
- > return self.f.write(*args, **kwargs)
- E OSError: [Errno 28] No space left on device
- 
- .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
- 
- The above exception was the direct cause of the following exception:
- 
- dataset_name = 'hest'
- 
- @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
- def test_no_within_data_duplicates(dataset_name: str):
- > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
- 
- src/tests/test_quality/test_duplicates.py:12:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- > raise DatasetGenerationError("An error occurred while generating the dataset") from e
- E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
- ----------------------------- Captured stderr call -----------------------------
- 
- ________________________ test_no_one_word_documents[ep] ________________________
- 
- self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x114c1bb90>
- gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ep/ep.parquet))}
- fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ep/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
- file_format = 'arrow', max_shard_size = 500000000, job_id = 0
- 
- def _prepare_split_single(
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
- > writer.write_table(table)
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
- self.pa_writer.write_table(pa_table, writer_batch_size)
- pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
- pyarrow/error.pxi:89: in pyarrow.lib.check_status
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- 
- self = <fsspec.implementations.local.LocalFileOpener object at 0x113e86290>
- args = (<pyarrow.Buffer address=0x5e1f0020000 size=76944794 is_cpu=True is_mutable=True>,)
- kwargs = {}
- 
- def write(self, *args, **kwargs):
- > return self.f.write(*args, **kwargs)
- E OSError: [Errno 28] No space left on device
- 
- .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
- 
- The above exception was the direct cause of the following exception:
- 
- dataset_name = 'ep'
- 
- @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
- # @pytest.mark.skip("This tests currently fails")
- def test_no_one_word_documents(dataset_name: str):
- > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
- 
- src/tests/test_quality/test_short_texts.py:14:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- > raise DatasetGenerationError("An error occurred while generating the dataset") from e
- E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
- ----------------------------- Captured stderr call -----------------------------
- 
- ________________________ test_no_one_word_documents[ft] ________________________
- 
- self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x12e558620>
- gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/ft/ft.parquet))}
- fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/ft/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
- file_format = 'arrow', max_shard_size = 500000000, job_id = 0
- 
- def _prepare_split_single(
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
- > writer.write_table(table)
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
- self.pa_writer.write_table(pa_table, writer_batch_size)
- pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
- pyarrow/error.pxi:89: in pyarrow.lib.check_status
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- 
- self = <fsspec.implementations.local.LocalFileOpener object at 0x113eb1d50>
- args = (<pyarrow.Buffer address=0x5e238020000 size=274397630 is_cpu=True is_mutable=True>,)
- kwargs = {}
- 
- def write(self, *args, **kwargs):
- > return self.f.write(*args, **kwargs)
- E OSError: [Errno 28] No space left on device
- 
- .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
- 
- The above exception was the direct cause of the following exception:
- 
- dataset_name = 'ft'
- 
- @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
- # @pytest.mark.skip("This tests currently fails")
- def test_no_one_word_documents(dataset_name: str):
- > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
- 
- src/tests/test_quality/test_short_texts.py:14:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- > raise DatasetGenerationError("An error occurred while generating the dataset") from e
- E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
- ----------------------------- Captured stderr call -----------------------------
- 
- _______________________ test_no_one_word_documents[hest] _______________________
- 
- self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x118b3f1a0>
- gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/hest/hest.parquet))}
- fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/hest/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
- file_format = 'arrow', max_shard_size = 500000000, job_id = 0
- 
- def _prepare_split_single(
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
- > writer.write_table(table)
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
- self.pa_writer.write_table(pa_table, writer_batch_size)
- pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
- pyarrow/error.pxi:89: in pyarrow.lib.check_status
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- 
- self = <fsspec.implementations.local.LocalFileOpener object at 0x113e85810>
- args = (<pyarrow.Buffer address=0x5e3c8020000 size=95688808 is_cpu=True is_mutable=True>,)
- kwargs = {}
- 
- def write(self, *args, **kwargs):
- > return self.f.write(*args, **kwargs)
- E OSError: [Errno 28] No space left on device
- 
- .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
- 
- The above exception was the direct cause of the following exception:
- 
- dataset_name = 'hest'
- 
- @pytest.mark.parametrize("dataset_name", DATASET_NAMES)
- # @pytest.mark.skip("This tests currently fails")
- def test_no_one_word_documents(dataset_name: str):
- > ds = load_dataset(str(repo_path.resolve()), dataset_name, split="train")
- 
- src/tests/test_quality/test_short_texts.py:14:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- > raise DatasetGenerationError("An error occurred while generating the dataset") from e
- E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
- ----------------------------- Captured stderr call -----------------------------
- 
- __________________________ test_ensure_ids_are_unique __________________________
- 
- self = <datasets.packaged_modules.parquet.parquet.ParquetDanish-dynaword object at 0x113ec1970>
- gen_kwargs = {'files': tracked_list(current=FilesIterable(current=/Users/au561649/Github/danish-dynaword/data/cellar/cellar.parquet))}
- fpath = '/Users/au561649/.cache/huggingface/datasets/danish-dynaword/default/0.0.0/5055500453bef830.incomplete/danish-dynaword-train-JJJJJ-SSSSS-of-NNNNN.arrow'
- file_format = 'arrow', max_shard_size = 500000000, job_id = 0
- 
- def _prepare_split_single(
- self, gen_kwargs: dict, fpath: str, file_format: str, max_shard_size: int, job_id: int
- ) -> Iterable[Tuple[int, bool, Union[int, tuple]]]:
- > writer.write_table(table)
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1870:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- .venv/lib/python3.12/site-packages/datasets/arrow_writer.py:627: in write_table
- self.pa_writer.write_table(pa_table, writer_batch_size)
- pyarrow/ipc.pxi:529: in pyarrow.lib._CRecordBatchWriter.write_table
- pyarrow/error.pxi:89: in pyarrow.lib.check_status
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- 
- self = <fsspec.implementations.local.LocalFileOpener object at 0x113aaffd0>
- args = (<pyarrow.Buffer address=0x5e500020000 size=81139164 is_cpu=True is_mutable=True>,)
- kwargs = {}
- 
- def write(self, *args, **kwargs):
- > return self.f.write(*args, **kwargs)
- E OSError: [Errno 28] No space left on device
- 
- .venv/lib/python3.12/site-packages/fsspec/implementations/local.py:426: OSError
- 
- The above exception was the direct cause of the following exception:
- 
- def test_ensure_ids_are_unique():
- name = str(repo_path.resolve())
- > ds = load_dataset(name, split="train")
- 
- src/tests/test_unique_ids.py:11:
- _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- > raise DatasetGenerationError("An error occurred while generating the dataset") from e
- E datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
- 
- .venv/lib/python3.12/site-packages/datasets/builder.py:1897: DatasetGenerationError
- ----------------------------- Captured stderr call -----------------------------
- 
- 
  =============================== warnings summary ===============================
1405
- src/tests/test_quality/test_short_texts.py: 33 warnings
1406
  /Users/au561649/Github/danish-dynaword/.venv/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
1407
 
1408
  -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
1409
- =========================== short test summary info ============================
1410
- FAILED src/tests/test_quality/test_duplicates.py::test_no_within_data_duplicates[ep]
1411
- FAILED src/tests/test_quality/test_duplicates.py::test_no_within_data_duplicates[ft]
1412
- FAILED src/tests/test_quality/test_duplicates.py::test_no_within_data_duplicates[tv2r]
1413
- FAILED src/tests/test_quality/test_duplicates.py::test_no_within_data_duplicates[hest]
1414
- FAILED src/tests/test_quality/test_short_texts.py::test_no_one_word_documents[ep]
1415
- FAILED src/tests/test_quality/test_short_texts.py::test_no_one_word_documents[ft]
1416
- FAILED src/tests/test_quality/test_short_texts.py::test_no_one_word_documents[hest]
1417
- FAILED src/tests/test_unique_ids.py::test_ensure_ids_are_unique - datasets.ex...
1418
- ====== 8 failed, 319 passed, 1 skipped, 33 warnings in 365.20s (0:06:05) =======
 
  ........................................................................ [ 57%]
  ................................................................. [ 76%]
  src/tests/test_load.py .. [ 77%]
+ src/tests/test_quality/test_duplicates.py .............................. [ 86%]
  ......s [ 88%]
+ src/tests/test_quality/test_short_texts.py ............................. [ 97%]
  ....... [ 99%]
+ src/tests/test_unique_ids.py . [100%]
 
  =============================== warnings summary ===============================
+ src/tests/test_quality/test_short_texts.py: 36 warnings
    /Users/au561649/Github/danish-dynaword/.venv/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
 
  -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
+ ================= 327 passed, 1 skipped, 36 warnings in 53.74s =================
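
The removed traceback above comes from `datasets.load_dataset` materialising one of the local parquet sources (`data/cellar/cellar.parquet`) while the quality tests ran. Below is a minimal sketch of that kind of load; the column names and the Hub repository id are assumptions for illustration and are not taken from the log itself.

```python
from datasets import load_dataset

# Sketch: load a single local source the same way the failing frame did,
# i.e. via the parquet builder reading data/cellar/cellar.parquet.
cellar = load_dataset(
    "parquet",
    data_files="data/cellar/cellar.parquet",
    split="train",
)
print(cellar[0])  # assumes the usual text/id columns are present in the source

# Sketch: load the published dataset from the Hub instead (repo id assumed).
dynaword = load_dataset(
    "danish-foundation-models/danish-dynaword",
    split="train",
    streaming=True,  # avoids materialising every parquet shard locally
)
print(next(iter(dynaword)))
```

Running either load in isolation is a quicker way to hit the `DatasetGenerationError` path shown in the removed traceback than re-running the full test suite.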
 
uv.lock CHANGED
The diff for this file is too large to render. See raw diff