Modalities: Image, Text · Formats: arrow · Libraries: Datasets · License: cc0-1.0
balsab committed (verified) · commit 1a61974 · 1 parent: 3a8290c

fix dataset config

Files changed (1): README.md (+101 −96)
---
license: cc0-1.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*.arrow
---

# KB-Books

## Dataset Description

### Dataset Summary

Documents from the [Royal Danish Library](https://www.kb.dk/en) published between 1750 and 1930.

The dataset contains each page of each document in both image and text form. The text was extracted with [OCR](https://en.wikipedia.org/wiki/Optical_character_recognition) at the time of digitization.

The documents (books of various genres) were obtained from the library as PDF files, with additional metadata as JSON files. The dataset was assembled to make these public domain Danish texts more accessible.

### Languages

All texts are in Danish.

## Dataset Structure

### Data Instances

```
{
    "doc_id": "unique document identifier",
    "page_id": "unique page identifier",
    "page_image": "image of the page, extracted from a PDF",
    "page_text": "OCRed text of the page, extracted from a PDF",
    "author": "name of the author; multiple authors are separated by ';'",
    "title": "document title",
    "published": "year of publication",
    "digitalized": "year the physical document was processed by the library",
    "file_name": "file name of the original PDF"
}
```

The "page_text" field was obtained through OCR and is therefore likely to contain noise, especially in older documents where the original text is handwritten or printed in ornate typefaces.

"author" and "title" may be missing, especially in documents published before 1833.

"digitalized" may be missing.


### Data Splits

All data is in the "train" split.
Data in [./data](./data/) is organized by year of publication and segmented into chunks of roughly 5 GB each.


## Dataset Creation

### Curation Rationale

The dataset makes public domain text data more accessible to anyone who wishes to view or use it.

The dataset was created mainly for research purposes and Natural Language Processing tasks.

The documents were filtered to ensure that no non-public-domain data is included. See [pd_check.md](./pd_check/pd_check.md) for how public domain status was confirmed and [scraping.md](./scrape/scraping.md) for how the list of possible Danish authors was collected.

**IMPORTANT: If you find non-public-domain data in the dataset, please let us know.**

### Source Data

The data consists of OCRed documents from the [Royal Danish Library](https://www.kb.dk/en) published between 1750 and 1930.
These documents are mostly books of various genres; no distinction was made among them based on genre. In addition to the text, the original PDF pages are included as images, which may allow the text quality to be improved later (for example, by re-running OCR).

The source data was written by humans, chiefly Danish-speaking authors, poets, and playwrights.

### Data Extraction

#### Logic

The flowchart below gives a broad overview and is not a fully accurate representation.

![Logic flowchart](./imgs/extract_flowchart.jpg)

The full Python script is provided for reference as [extract_data.py](./extract_data.py).

Made with:

- Python 3.12.10

Required libraries:

- [PyMuPDF](https://pypi.org/project/PyMuPDF/) 1.26.0
- [datasets](https://pypi.org/project/datasets/) 3.5.0


## Additional Information

### Dataset Curators

***write something here***

### License

The documents in the dataset are part of the [public domain](https://creativecommons.org/public-domain/).