fix dataset config
README.md
---
license: cc0-1.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*.arrow
---
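
The `configs` block above is what this commit fixes: it points the default config's `train` split at the Arrow shards under `data/`, so the standard `datasets` loader and the Hub viewer pick the files up automatically (a hedged load example is given under Data Splits below).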

# KB-Books

## Dataset Description

### Dataset Summary

Documents from the [Royal Danish Library](https://www.kb.dk/en) published between 1750 and 1930.

The dataset contains each page of each document in both image and text format. The text was extracted with [OCR](https://en.wikipedia.org/wiki/Optical_character_recognition) at the time of digitization.

The documents (books of various genres) were obtained from the library in .pdf format, with additional metadata as .json files. The dataset was assembled to make these public domain Danish texts more accessible.

### Languages

All texts are in Danish.

## Dataset Structure

### Data Instances

```
{
  "doc_id": "unique document identifier",
  "page_id": "unique page identifier",
  "page_image": "image of the page, extracted from a pdf",
  "page_text": "OCRed text of the page, extracted from a pdf",
  "author": "name of the author. If more than one, separated by ';'",
  "title": "document title",
  "published": "year of publishing",
  "digitalized": "year of processing the physical document by the library",
  "file_name": "file name of the original PDF"
}
```

The "page_text" was obtained through OCR and is therefore likely to contain noise, especially in older documents where the original text is handwritten or printed in ornate typefaces.

"author" and "title" may be missing, especially in documents published before 1833.

"digitalized" may be missing.

### Data Splits

All data is in the "train" split.
Data in [./data](./data/) is organized by year of publication and is segmented into ~5 GB chunks.
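
Given the ~5 GB chunks, streaming avoids downloading the full dataset before reading a single page. A sketch under the same placeholder repo id as above:

```python
from datasets import load_dataset

# Stream records lazily instead of materializing all shards on disk.
ds = load_dataset("ORG/KB-Books", split="train", streaming=True)

for record in ds:
    print(record["doc_id"], record["published"])
    break  # just peek at the first page
```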

## Dataset Creation

### Curation Rationale

The dataset makes public domain text data more accessible to whoever may wish to view or use it.

The dataset was created mainly for research purposes and Natural Language Processing tasks.

The documents were filtered to ensure that no non-public-domain data is included. See [pd_check.md](./pd_check/pd_check.md) for the confirmation of public domain status and [scraping.md](./scrape/scraping.md) for the collection of candidate Danish author names.

**IMPORTANT: If non-public-domain data is found in the dataset, please let us know.**

### Source Data

The data consists of OCRed documents from the [Royal Danish Library](https://www.kb.dk/en) published between 1750 and 1930.
These documents are mostly books of various genres; no distinction was made among the documents based on genre. In addition to the text, the original PDF pages are included as images, which may help improve the quality of the text.

The source data was produced by humans, chiefly Danish-speaking authors, poets, and playwrights.

### Data Extraction

#### Logic

The flowchart gives a broad overview and is not a fully accurate representation.

![Flowchart of the extraction logic](images/data_extract_logic.png "Data extraction logic")

The full Python script is provided for reference as [extract_data.py](./extract_data.py); a minimal sketch of the per-page loop appears after the library list below.

Made with:

- python 3.12.10

Required libraries for running:

- [PyMuPDF](https://pypi.org/project/PyMuPDF/) 1.26.0
- [datasets](https://pypi.org/project/datasets/) 3.5.0
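
The following is only an illustrative sketch of the kind of per-page extraction the flowchart describes, not the actual code in [extract_data.py](./extract_data.py); the input path and record layout here are assumptions:

```python
import pymupdf                 # PyMuPDF 1.26
from datasets import Dataset   # datasets 3.5

# Hypothetical input file; the real script walks the library's PDF dumps.
doc = pymupdf.open("example_book.pdf")

records = []
for page_number, page in enumerate(doc):
    records.append({
        "page_id": f"example_book_p{page_number}",
        # The PDFs carry an OCR text layer, so get_text() returns it.
        "page_text": page.get_text(),
        # Render the page to PNG bytes to serve as the page image.
        "page_image": page.get_pixmap(dpi=150).tobytes("png"),
    })
doc.close()

# Assemble the per-page records into an Arrow-backed dataset.
ds = Dataset.from_list(records)
print(ds)
```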

## Additional Information

### Dataset Curators

***write something here***

### License

The documents in the dataset are part of the [public domain](https://creativecommons.org/public-domain/).