---
dataset_info:
- config_name: '1640'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 254777
num_examples: 3509
download_size: 114173
dataset_size: 254777
- config_name: '1650'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 31314
num_examples: 412
download_size: 15122
dataset_size: 31314
- config_name: '1660'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 56559
num_examples: 726
download_size: 25941
dataset_size: 56559
- config_name: '1670'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15093
num_examples: 188
download_size: 8153
dataset_size: 15093
- config_name: '1680'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1290089
num_examples: 17458
download_size: 609438
dataset_size: 1290089
- config_name: '1690'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2977705
num_examples: 42333
download_size: 1355778
dataset_size: 2977705
- config_name: '1700'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3800917
num_examples: 53331
download_size: 1702603
dataset_size: 3800917
- config_name: '1710'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1601983
num_examples: 22763
download_size: 733219
dataset_size: 1601983
- config_name: '1720'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2268261
num_examples: 32813
download_size: 1012144
dataset_size: 2268261
- config_name: '1730'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5498116
num_examples: 79079
download_size: 2515986
dataset_size: 5498116
- config_name: '1740'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10147602
num_examples: 149317
download_size: 4572359
dataset_size: 10147602
- config_name: '1750'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 14183279
num_examples: 212000
download_size: 6235076
dataset_size: 14183279
- config_name: '1760'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 34039377
num_examples: 545759
download_size: 15159865
dataset_size: 34039377
- config_name: '1770'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 89191958
num_examples: 1333609
download_size: 39582304
dataset_size: 89191958
- config_name: '1780'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 136703541
num_examples: 2015223
download_size: 60960878
dataset_size: 136703541
- config_name: '1790'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 163823087
num_examples: 2435714
download_size: 72860792
dataset_size: 163823087
- config_name: '1800'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 220361417
num_examples: 3368887
download_size: 98935407
dataset_size: 220361417
- config_name: '1810'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 263830012
num_examples: 4205776
download_size: 122219730
dataset_size: 263830012
- config_name: '1820'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 395727486
num_examples: 6265710
download_size: 175240370
dataset_size: 395727486
- config_name: '1830'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 580725783
num_examples: 9355635
download_size: 254403662
dataset_size: 580725783
- config_name: '1840'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 898420001
num_examples: 14051720
download_size: 381018147
dataset_size: 898420001
- config_name: '1850'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1354049159
num_examples: 21187511
download_size: 570228565
dataset_size: 1354049159
- config_name: '1860'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2512543535
num_examples: 39321823
download_size: 1046916115
dataset_size: 2512543535
- config_name: '1870'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3383836222
num_examples: 53045312
download_size: 1399880807
dataset_size: 3383836222
- config_name: '1880'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4501878144
num_examples: 72015436
download_size: 1827179641
dataset_size: 4501878144
- config_name: '1890'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3219902112
num_examples: 52337279
download_size: 1315107645
dataset_size: 3219902112
- config_name: '1900'
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 205822484
num_examples: 3284826
download_size: 84811326
dataset_size: 205822484
- config_name: all
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7999426267
num_examples: 285384149
download_size: 7483375536
dataset_size: 7999426267
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- sv
tags:
- newspapers
- historical
size_categories:
- 1B<n<10B
---
# kubhist2
## Dataset Description
- **Homepage:** https://changeiskey.org
- **Repository:** https://github.com/ChangeIsKey/kubhist2
- **Point of Contact:** Simon Hengchen / iguanodon.ai
### Dataset Summary
This is a version of the Kubhist 2 dataset originally created, curated and made available by Språkbanken Text (SBX) at the University of Gothenburg (Sweden) under the CC BY 4.0 license.
This is a corpus of OCRed newspapers from Sweden spanning the 1640s to the 1900s.
The original data is available with many types of annotation in XML at https://spraakbanken.gu.se/en/resources/kubhist2.
A good description of the original data is available in this blog entry by Dana Dannélls: https://spraakbanken.gu.se/blogg/index.php/2019/09/15/the-kubhist-corpus-of-swedish-newspapers/.
If you use this dataset for academic research, cite it using the provided citation information at the bottom of this page.
In a nutshell, this Hugging Face dataset version offers:
- only the OCRed text
- available in decadal subsets
- one line per sentence; sentences shorter than 4 words were discarded
In total this dataset contains 2,819,065,590 tokens. A distribution of tokens per decade is available below.
The license of this version is CC BY-SA 4.0.
```bash
(env) simon@terminus:/mnt/user/cik/kubhist2$ wc -w text/*/*.txt
39348 text/1640/1640.txt
4700 text/1650/1650.txt
8524 text/1660/1660.txt
2396 text/1670/1670.txt
199670 text/1680/1680.txt
487943 text/1690/1690.txt
619884 text/1700/1700.txt
265930 text/1710/1710.txt
355759 text/1720/1720.txt
856218 text/1730/1730.txt
1589508 text/1740/1740.txt
2211316 text/1750/1750.txt
5496545 text/1760/1760.txt
14434932 text/1770/1770.txt
22366170 text/1780/1780.txt
26768856 text/1790/1790.txt
36225842 text/1800/1800.txt
44510588 text/1810/1810.txt
65571094 text/1820/1820.txt
95359730 text/1830/1830.txt
143992956 text/1840/1840.txt
214538699 text/1850/1850.txt
392672066 text/1860/1860.txt
524802728 text/1870/1870.txt
695859650 text/1880/1880.txt
498244203 text/1890/1890.txt
31580335 text/1900/1900.txt
2819065590 total
```
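If you want to reproduce a similar whitespace-based count from the Hugging Face version of a single decade, here is a minimal sketch (the figures above were computed with `wc -w` on intermediate text files, so small differences are possible):

```python
from datasets import load_dataset

# Whitespace-based token count for a single decade of the Hugging Face version.
ds = load_dataset("ChangeIsKey/kubhist2", "1700")
n_tokens = sum(len(example["text"].split()) for example in ds["train"])
print(n_tokens)
```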
### Languages
Swedish (nysvenska, i.e. Modern Swedish)
## Dataset Structure
One feature: `text`.
Load the whole corpus using
```python
from datasets import load_dataset

dataset = load_dataset("ChangeIsKey/kubhist2")
```
or a decadal subset using
```python
dataset = load_dataset("ChangeIsKey/kubhist2", "decade")
```
The `decade` must be a string; valid values correspond to `range(1640, 1910, 10)`, e.g. `"1800"`.
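For example, a minimal sketch that builds the list of valid config names and peeks at a few examples via streaming (streaming support and the chosen decade are assumptions for illustration):

```python
from datasets import load_dataset

# Valid decade configs are the string forms of range(1640, 1910, 10);
# there is also an "all" config covering the whole corpus.
decade_configs = [str(d) for d in range(1640, 1910, 10)]
print(decade_configs[:3])  # ['1640', '1650', '1660']

# Streaming avoids downloading a full decade up front (assuming streaming
# is supported for this dataset).
ds = load_dataset("ChangeIsKey/kubhist2", "1800", streaming=True)
for i, example in enumerate(ds["train"]):
    print(example["text"])
    if i == 4:
        break
```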
You can combine several decades using `concatenate_datasets` like this:
```python
from datasets import load_dataset, concatenate_datasets
ds_1800 = load_dataset("ChangeIsKey/kubhist2", "1800")
ds_1810 = load_dataset("ChangeIsKey/kubhist2", "1810")
ds_1820 = load_dataset("ChangeIsKey/kubhist2", "1820")
ds_1800_1820 = concatenate_datasets([
    ds_1800["train"],
    ds_1810["train"],
    ds_1820["train"]
])
```
### Data Splits
The dataset has only one split, `train`.
## Dataset Creation
### Curation Rationale
The original data is in a heavily annotated XML format that is not ideally suited for basic NLP tasks such as unsupervised language modeling: information such as page numbers, fonts, etc. is less relevant and has thus been discarded.
Keeping only the running text of the newspapers and removing sentences shorter than 4 words allows a roughly 150x reduction in data size (2.4 TB → 16 GB), as sketched below.
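The extraction pipeline itself is not distributed with this dataset, but a minimal sketch of the sentence-length filtering described above could look as follows (the helper name and the example sentences are hypothetical):

```python
def keep_long_sentences(sentences, min_words=4):
    """Keep sentences with at least `min_words` whitespace-separated words.

    Whitespace-based counting is an assumption for illustration; the actual
    pipeline works on Språkbanken Text's tokenised XML.
    """
    return [s.strip() for s in sentences if len(s.split()) >= min_words]

# Hypothetical, already sentence-segmented input:
sample = ["Detta är en exempelmening från en gammal tidning .", "För kort ."]
print(keep_long_sentences(sample))  # only the first sentence is kept
```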
### Source Data
The original data is available with many types of annotation in XML at https://spraakbanken.gu.se/en/resources/kubhist2.
#### Initial Data Collection and Normalization
See Språkbanken Text's website.
#### Who are the source language producers?
Språkbanken Text: https://spraakbanken.gu.se/en/
### Personal and Sensitive Information
This is historical newspaper data, with the latest data published in 1909. Everyone mentioned in this dataset was most likely a public figure at the time and has been dead for a long time.
## Considerations for Using the Data
### Discussion of Biases
This is historical data. As such, outdated views might be present in the data.
### Other Known Limitations
The data comes from an OCR process. The text is thus not perfect, especially so in the earlier decades.
## Additional Information
### Dataset Curators
This Hugging Face version of the data was created by Simon Hengchen.
### Licensing Information
Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
### Citation Information
You should always cite the original Kubhist 2 release, provided below as BibTeX. If you additionally want to refer to this specific version, please also add a link to the Hugging Face page: https://huggingface.co/datasets/ChangeIsKey/kubhist2.
```bibtex
@misc{Kubhist2,
title = {The Kubhist Corpus, v2},
url = {https://spraakbanken.gu.se/korp/?mode=kubhist},
author = {Spr{\aa}kbanken},
year = {Downloaded in 2019},
organization = {Department of Swedish, University of Gothenburg}
}
```
### Acknowledgments
This dataset was created in the context of the [ChangeIsKey!](https://www.changeiskey.org/) project, funded by Riksbankens Jubileumsfond (reference number M21-0021, Change is Key! program).
The compute dedicated to the creation of the dataset has been provided by [iguanodon.ai](https://iguanodon.ai).
Many thanks go to Språkbanken Text for creating and curating this resource.