Datasets: danish-foundation-models/danish-dynaword
Tasks: Text Generation
Formats: parquet
Sub-tasks: language-modeling
Languages: Danish
Size: 10M - 100M

Wikipedia Comments #78
by robvanderg - opened
- CHANGELOG.md +6 -0
- CONTRIBUTING.md +7 -1
- README.md +5 -1
- data/wiki-comments/create.py +119 -0
- data/wiki-comments/descriptive_stats.json +6 -0
- data/wiki-comments/images/dist_document_length.png +3 -0
- data/wiki-comments/wiki-comments.md +97 -0
- data/wiki-comments/wiki-comments.parquet +3 -0
- pyproject.toml +1 -1
- test_results.log +12 -13
CHANGELOG.md
CHANGED
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
+## [v1.2.6] - 2025-07-21
+
+### Added
+
+- Added the `wiki-comments` dataset.
+
 ## [v1.2.5] - 2025-07-08
 
 ### Added
CONTRIBUTING.md
CHANGED
@@ -42,6 +42,12 @@ This repo comes with a few dependencies you need to install to make this run. It
 make install
 ```
 
+Now you can activate the environment with:
+
+```
+source .venv/bin/activate
+```
+
 ## Running dataset tests
 
 This dataset is special as it comes with a test suite, e.g. testing in the ids are unique and that the format is consistent. You can run the suite using
@@ -96,4 +102,4 @@ We will for instance examine the quality of the synthetic subset and whether the
 
 ### Do you accept non-Danish data
 
-Generally this repository is intended for Danish text, however quite broadly defined. For instance, we do accept data containing [code-switching](https://www.google.com/search?client=safari&rls=en&q=code+switching&ie=UTF-8&oe=UTF-8) and historical Danish text.
+Generally this repository is intended for Danish text, however quite broadly defined. For instance, we do accept data containing [code-switching](https://www.google.com/search?client=safari&rls=en&q=code+switching&ie=UTF-8&oe=UTF-8) and historical Danish text.
README.md
CHANGED
@@ -129,6 +129,10 @@ configs:
   data_files:
   - split: train
     path: data/wiki/*.parquet
+- config_name: wiki-comments
+  data_files:
+  - split: train
+    path: data/wiki-comments/*.parquet
 - config_name: nordjyllandnews
   data_files:
   - split: train
@@ -182,7 +186,7 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md
 <!-- START README TABLE -->
 |              |                                                                                                                   |
 | ------------ | ----------------------------------------------------------------------------------------------------------------- |
-| **Version**  | 1.2.5 ([Changelog](/CHANGELOG.md))                                                                                  |
+| **Version**  | 1.2.6 ([Changelog](/CHANGELOG.md))                                                                                  |
 | **Language** | dan, dansk, Danish                                                                                                  |
 | **License**  | Openly Licensed, See the respective dataset                                                                         |
 | **Models**   | For model trained used this data see [danish-foundation-models](https://huggingface.co/danish-foundation-models)   |
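With this config entry in place, the new subset should be loadable like any other Dynaword subset. A minimal sketch (not part of the PR; repo id taken from the dynaword source referenced in `create.py`, config name from the diff above):

```py
from datasets import load_dataset

# Load only the wiki-comments subset added by this PR.
ds = load_dataset(
    "danish-foundation-models/danish-dynaword",
    name="wiki-comments",
    split="train",
)
print(ds[0]["id"], ds[0]["token_count"])
```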
data/wiki-comments/create.py
ADDED
@@ -0,0 +1,119 @@
# /// script
# requires-python = "==3.12"
# dependencies = [
#     "datasets==3.2.0",
#     "dynaword",
#     "fasttext",
#     "huggingface_hub"
# ]
# [tool.uv.sources]
# dynaword = { git = "https://huggingface.co/datasets/danish-foundation-models/danish-dynaword", rev = "6b3822fd6965dda59ae361da99c19b5c56c1263f" }
# ///
"""
This script downloads and cleans Wikipedia, it only keeps the user comments.
"""
import html
import json
import os
import requests
import sys

from datasets import IterableDataset
from datasets import Dataset
import fasttext
from huggingface_hub import hf_hub_download

from dynaword.process_dataset import (
    add_token_count,
    ensure_column_order,
    remove_duplicate_text,
    remove_empty_texts,
)


def run_cmd(cmd):
    print(cmd)
    os.system(cmd)

def download_data(lang, date):
    filename = lang + 'wiki-' + date + '-pages-articles.xml.bz2'
    if not os.path.isfile(filename):
        url = 'https://dumps.wikimedia.org/' + lang + 'wiki/' + date + '/' + filename
        response = requests.get(url)

        with open(filename, "wb") as file:
            file.write(response.content)

    return filename

def install_wikiextractor():
    if not os.path.isdir('wikiextractor'):
        cmd = 'git clone https://github.com/robvanderg/wikiextractor.git'
        run_cmd(cmd)

def run_wikiextractor(in_path, out_path):
    # clean the data
    if not os.path.isdir('wikiextractor/' + out_path):
        cmd = 'cd wikiextractor && python3 -m wikiextractor.WikiExtractor ../' + in_path + ' -o ' + out_path + ' --get_misc --json && cd ../ '
        run_cmd(cmd)

def read_and_clean(path):
    # Load fasttext model
    model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin")
    fasttext_model = fasttext.load_model(model_path)

    comment_id = 0
    all_rows = []
    for (root, dirs, files) in os.walk(path, topdown=True):
        for file in files:
            path = os.path.join(root, file)
            for line in open(path):
                linedata = json.loads(line)
                title = linedata['title']
                category = title.split(':')[0]
                if category == 'Wikipedia':
                    if title.startswith('Wikipedia:Dagens '):
                        continue
                    id = 'wikicomment_' + str(comment_id)
                    comment_id += 1
                else:  # There is more data, but we just want to comments for now
                    continue
                source = 'wiki_misc'
                # TODO add linedata['url'] somewhere?
                text = html.unescape(linedata['text'])
                lines = line.split('\n')
                filtered_text = ''
                for line in text.split('\n'):
                    if '{{}}' in text:  # unresolved templates
                        continue

                    lang_pred = fasttext_model.predict(line)
                    if lang_pred[0][0] == '__label__dan_Latn' and lang_pred[1][0] > .5:
                        filtered_text += line + '\n'
                added = '2025-07-21'
                created = '2002-02-01, 2025-07-20'
                row = {"id": id, "text": filtered_text, "source": source, "added": added, "created": created}
                all_rows.append(row)
    return all_rows


if __name__ == '__main__':
    date = '20250720'  # obtained from https://dumps.wikimedia.org/dawiki/
    lang = 'da'
    bz2_path = download_data(lang, date)

    install_wikiextractor()
    data_folder = lang + 'wiki-misc-' + date
    run_wikiextractor(bz2_path, data_folder)

    full_data = read_and_clean('wikiextractor/' + data_folder)

    ds = Dataset.from_list(full_data)

    ds = remove_empty_texts(ds)
    ds = remove_duplicate_text(ds)
    ds = add_token_count(ds)
    ds = ensure_column_order(ds)

    ds.to_parquet('data/wiki-comments/wiki-comments.parquet')
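The inline script metadata suggests the script is meant to be run with uv (for example `uv run data/wiki-comments/create.py`), which would pull the pinned `dynaword` helpers from the repository itself. A minimal sketch (not part of the PR) for sanity-checking the parquet the script writes:

```py
from datasets import Dataset

# Assumed check, not from the PR: load the freshly written parquet and confirm
# the columns the Dynaword schema expects are present.
ds = Dataset.from_parquet("data/wiki-comments/wiki-comments.parquet")
print(ds.column_names)  # id, text, source, added, created, token_count
print(ds[0]["id"])      # e.g. "wikicomment_0"
```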
data/wiki-comments/descriptive_stats.json
ADDED
@@ -0,0 +1,6 @@
{
    "number_of_samples": 12462,
    "average_document_length": 2101.6405071417107,
    "number_of_tokens": 8805990,
    "revision": "6b3822fd6965dda59ae361da99c19b5c56c1263f"
}
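These figures match the dataset card below: sample count, average document length in characters, and Llama 3 token count. A rough way to re-derive them from the shipped parquet (an assumed pandas check, not how the repo generates this file):

```py
import pandas as pd

# Recompute the headline stats directly from the parquet added in this PR.
df = pd.read_parquet("data/wiki-comments/wiki-comments.parquet")
print({
    "number_of_samples": len(df),
    "average_document_length": df["text"].str.len().mean(),
    "number_of_tokens": int(df["token_count"].sum()),
})
```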
data/wiki-comments/images/dist_document_length.png
ADDED
(binary PNG tracked with Git LFS: distribution of document lengths)
data/wiki-comments/wiki-comments.md
ADDED
@@ -0,0 +1,97 @@
---
pretty_name: Wikipedia Comments
language:
- da
license: cc0-1.0
license_name: CC-0
size_categories:
- 100k-1M
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
domains:
- Encyclopedic
---

# Dataset Card for Wikipedia Comments

<!-- START-SHORT DESCRIPTION -->
Text from the comments sections of the Danish Wikipedia.
<!-- END-SHORT DESCRIPTION -->


You can read more about the wikipedia on their [about](https://da.wikipedia.org/wiki/Hj%C3%A6lp:Om) page.

## Dataset Description


<!-- START-DESC-STATS -->
- **Language**: dan, dansk, Danish
- **Domains**: Encyclopedic
- **Number of samples**: 12.46K
- **Number of tokens (Llama 3)**: 8.81M
- **Average document length (characters)**: 2101.64
<!-- END-DESC-STATS -->



## Dataset Structure
An example from the dataset looks as follows.


<!-- START-SAMPLE -->
```py
{
  "id": "wikicomment_0",
  "text": " A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Æ Ø Å\nDette er en noget-nær-alfabetisk liste ov[...]",
  "source": "wiki_misc",
  "added": "2025-07-21",
  "created": "2002-02-01, 2025-07-20",
  "token_count": 161
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `id` (`str`): An unique identifier for each document.
- `text`(`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `added` (`str`): An date for when the document was added to this collection.
- `created` (`str`): An date range for when the document was originally created.
- `token_count` (`int`): The number of tokens in the sample computed using the Llama 8B tokenizer
<!-- END-SAMPLE -->
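For the `token_count` field described above, a rough illustration of how such a count could be reproduced (assumed, not the repo's own `add_token_count` helper; `meta-llama/Meta-Llama-3-8B` is a guess at the tokenizer and is a gated model):

```py
from transformers import AutoTokenizer

# Illustration only: tokenize a snippet and count the resulting ids.
tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
n_tokens = len(tok("Dette er en noget-nær-alfabetisk liste")["input_ids"])
print(n_tokens)
```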

### Dataset Statistics

<!-- START-DATASET PLOTS -->
<p align="center">
<img src="./images/dist_document_length.png" width="600" style="margin-right: 10px;" />
</p>
<!-- END-DATASET PLOTS -->



## Additional Information

This dataset is collected using an adapted version of the [WikiExtractor](https://github.com/attardi/wikiextractor). Rob van der Goot created a fork that allows for extracting additional text from Wiki's. The fork can be found here: [WikiExtractor](https://github.com/robvanderg/wikiextractor.git).

After inspection of the different outputs, there are multiple categories of files, which can most easily be distinguished through the title field. Below, I list the different categories, their size (number of pages), and what they seem to contain after a manual inspection.

```
71472 Kategori: category overview pages
19992 Wikipedia: Comments, but also daily articles
 2379 Portal: Also monthly articles, and some lists/calendars
 1360 MediaWiki: About files, contains almost no natural language
  726 Modul: technical stuff, contains almost no (Danish) text
  171 Hjælp: help pages; info and comments
```

In the current version of the dataset, we used the titles starting with `Wikipedia:`, and remove the daily articles by leaving out titles starting with "Wikipedia:Dagens".
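A minimal sketch of that title filter (an assumed standalone helper, not code from the PR; `create.py` applies the same rule inline):

```py
# Keep only discussion/meta pages, dropping the daily featured-article pages,
# mirroring the rule described above.
def keep_page(title: str) -> bool:
    return title.startswith("Wikipedia:") and not title.startswith("Wikipedia:Dagens")
```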

In this data we include comments where people discuss things like: content of pages, writing style, which pages/information to include/exclude, etc. It also includes pages written for people that contribute to Wikipedia.

### Citation Information
data/wiki-comments/wiki-comments.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:64c8b39dcd96ef6ac2bcfb5403cf9cd74f2ee17b954de3478f543922ee8dea5e
size 10828848
pyproject.toml
CHANGED
@@ -1,6 +1,6 @@
 [project]
 name = "dynaword"
-version = "1.2.5"
+version = "1.2.6"
 description = "project code for the danish dynaword project"
 readme = "README.md"
 requires-python = ">=3.12,<3.13" # 3.13 have issues with spacy and pytorch
test_results.log
CHANGED
@@ -1,25 +1,24 @@
 ============================= test session starts ==============================
-platform [...]
-rootdir: /[...]
+platform linux -- Python 3.12.3, pytest-8.3.4, pluggy-1.5.0
+rootdir: /home/rob/Projects/danish-dynaword
 configfile: pyproject.toml
-collected 328 items
+collected 337 items
 
 src/tests/test_dataset_schema.py ....................................... [ 11%]
-src/tests/test_datasheets.py ........................................... [...]
-........................................................................ [...]
+................................... [ 21%]
+src/tests/test_datasheets.py ........................................... [ 34%]
+........................................................................ [ 56%]
+...................................................................... [ 76%]
 src/tests/test_load.py .. [ 77%]
 src/tests/test_quality/test_duplicates.py .............................. [ 86%]
+.......s [ 88%]
 src/tests/test_quality/test_short_texts.py ............................. [ 97%]
+........ [ 99%]
 src/tests/test_unique_ids.py . [100%]
 
 =============================== warnings summary ===============================
-src/tests/test_quality/test_short_texts.py: [...]
-/[...]
+src/tests/test_quality/test_short_texts.py: 37 warnings
+  /home/rob/Projects/danish-dynaword/.venv/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.
 
 -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
-================= [...]
+================= 336 passed, 1 skipped, 37 warnings in 44.95s =================