---
license: cc-by-sa-4.0
language:
- bar
pretty_name: Bavarian Wikipedia Dump
size_categories:
- 10K<n<100K
---
# Bavarian Wikipedia Dump
This dataset hosts a recent Bavarian Wikipedia dump that is used for various experiments within the Bavarian NLP organization.
## Dataset Creation
The latest dump was downloaded with:

```bash
wget https://dumps.wikimedia.org/barwiki/20250620/barwiki-20250620-pages-articles.xml.bz2
```
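To guard against a truncated or corrupted download, the archive can be checked against the SHA-1 sums that Wikimedia publishes alongside each dump. A minimal sketch in Python; the `barwiki-20250620-sha1sums.txt` filename is assumed from the usual dump naming scheme and is not part of the original steps:

```python
import hashlib

# Compute the SHA-1 of the downloaded dump in chunks to keep memory flat.
sha1 = hashlib.sha1()
with open("barwiki-20250620-pages-articles.xml.bz2", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha1.update(chunk)

print(sha1.hexdigest())
# Compare against the matching line in barwiki-20250620-sha1sums.txt
# (assumed filename), which lists entries of the form:
#   <sha1-hex>  barwiki-20250620-pages-articles.xml.bz2
```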
A patched version of WikiExtractor was then used (with Python 3.12.3; newer Python versions did not work) to extract all articles into JSONL:
```bash
python3 -m wikiextractor.WikiExtractor --json --no-templates -o - barwiki-20250620-pages-articles.xml.bz2 > bar_wikipedia.jsonl
```
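Each line of the resulting file holds one article as a JSON object. A small sketch for inspecting the first record; the `title`, `url`, and `text` field names follow WikiExtractor's usual JSON output and may vary by version:

```python
import json

# Read the first article from the extracted JSONL file.
with open("bar_wikipedia.jsonl", encoding="utf-8") as f:
    first = json.loads(next(f))

# Typical WikiExtractor JSON fields; treat the exact set as version-dependent.
print(first.get("title"))
print(first.get("url"))
print(first.get("text", "")[:200])  # first 200 characters of the article body
```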
The final JSONL file was then uploaded to this repository.
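For experiments, the file can be read with the 🤗 Datasets generic JSON loader, which treats each line of a JSONL file as one row. A sketch assuming the local filename from the step above:

```python
from datasets import load_dataset

# Load the JSONL dump with the generic JSON loader; each line becomes one row.
ds = load_dataset("json", data_files="bar_wikipedia.jsonl", split="train")

print(ds)               # features and number of rows
print(ds[0]["title"])   # first article's title (field name assumed, see above)
```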
## Stats
The extracted JSONL file contains 43,911 articles.
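Since the extractor writes one article per line, this figure can be reproduced with a quick line count (a sanity check, not part of the original pipeline):

```python
# Count articles: one JSON object per non-empty line in the extracted file.
with open("bar_wikipedia.jsonl", encoding="utf-8") as f:
    n_articles = sum(1 for line in f if line.strip())

print(n_articles)  # expected: 43911
```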