docs: add initial version
README.md

---
license: cc-by-sa-4.0
language:
- bar
pretty_name: Bavarian Wikipedia Dump
size_categories:
- 100K<n<1M
---

# Bavarian Wikipedia Dump

This dataset hosts a recent Bavarian Wikipedia dump that is used for various experiments within the [Bavarian NLP](https://huggingface.co/bavarian-nlp) organization.

## Dataset Creation

The latest dump was downloaded with:

```bash
wget https://dumps.wikimedia.org/barwiki/20250720/barwiki-20250720-pages-articles.xml.bz2
```
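
Wikimedia publishes checksum files alongside each dump, so the download can optionally be verified before extraction. The snippet below is a small sketch of that step; the checksum file name follows Wikimedia's usual `<wiki>-<date>-sha1sums.txt` convention and is an assumption, not part of the original workflow:

```bash
# Optional: verify the dump against the published SHA-1 sums
# (checksum file name assumed to follow Wikimedia's usual naming scheme).
wget https://dumps.wikimedia.org/barwiki/20250720/barwiki-20250720-sha1sums.txt
grep pages-articles.xml.bz2 barwiki-20250720-sha1sums.txt | sha1sum -c -
```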

Then a patched version of [WikiExtractor](https://github.com/attardi/wikiextractor) was used (with Python 3.12.3; newer Python versions did not work) to extract all articles into JSONL:

```bash
python3 -m wikiextractor.WikiExtractor --json --no-templates -o - barwiki-20250720-pages-articles.xml.bz2 > bar_wikipedia.jsonl
```
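
With `--json`, WikiExtractor emits one JSON object per line, typically carrying `id`, `url`, `title`, and `text` fields (the exact set depends on the WikiExtractor version). A quick way to sanity-check the output is to pretty-print the first record:

```bash
# Peek at the first extracted article; each line of the JSONL file is one JSON object.
head -n 1 bar_wikipedia.jsonl | python3 -m json.tool
```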
27 |
+
|
28 |
+
The final JSONL file was then uploaded here.
|
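
To pull the file back down from the Hub, something like the following should work; the repository id below is a placeholder, and the file name assumes the JSONL was uploaded under its original name:

```bash
# Download the JSONL from this dataset repository (repo id is a placeholder -
# substitute the actual id of this dataset).
huggingface-cli download bavarian-nlp/bavarian-wikipedia-dump bar_wikipedia.jsonl \
    --repo-type dataset --local-dir .
```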

## Stats

The extracted JSONL file contains 43,917 articles.
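
Since the extractor writes one article per line, the count can be reproduced with a simple line count (assuming the uploaded file keeps that one-object-per-line layout):

```bash
# One JSON object per line, so the line count equals the number of articles.
wc -l < bar_wikipedia.jsonl
```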