The dataset contains 653,983 OCR texts (~ 200 million pages) from various collections of the Internet Archive (IA).
In order to reliably find public domain books among the IA collections, the dataset was curated by combining three approaches:

1. Manually identifying IA collections which explicitly state that they exclusively contain public domain materials, e.g. the [Cornell University Library collection](https://archive.org/details/cornell/about?tab=about), and downloading them in bulk.
2. Using the [possible-copyright-status](https://archive.org/developers/metadata-schema/index.html#possible-copyright-status) query parameter to search for items with the status `NOT_IN_COPYRIGHT` across all IA collections using the [IA Search API](https://archive.org/help/aboutsearch.htm).
3. Restricting all IA searches with the query parameter `openlibrary_edition:*` to ensure that all returned items possess an OpenLibrary record, i.e. to ensure that they are books and not some other form of text (see the query sketch after this list).
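As a rough illustration of how approaches 2 and 3 combine into a single search, the sketch below queries the public IA advanced search endpoint. The query string, field list and page size are illustrative assumptions, not the exact parameters used to curate this dataset.

```python
# Illustrative only: combine the copyright-status filter (approach 2) with the
# OpenLibrary-edition restriction (approach 3) in one query to the IA search API.
import requests

IA_SEARCH_URL = "https://archive.org/advancedsearch.php"

params = {
    "q": 'possible-copyright-status:"NOT_IN_COPYRIGHT" AND openlibrary_edition:*',
    "fl[]": ["identifier", "title", "creator", "year", "openlibrary_edition"],
    "rows": 100,        # page size (assumed)
    "page": 1,
    "output": "json",
}

response = requests.get(IA_SEARCH_URL, params=params, timeout=30)
response.raise_for_status()

for doc in response.json()["response"]["docs"]:
    print(doc["identifier"], doc.get("title"))
```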
## Size
The size of the full uncompressed dataset is ~400GB, and the compressed Parquet files are ~220GB in total. Each of the 327 Parquet files contains a maximum of 2000 books.
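Because the data ships as Parquet shards, it can be streamed rather than downloaded in full. A minimal sketch using the Hugging Face `datasets` library; the repository id is a placeholder, and the accessed field assumes the schema described under [Data Fields](#data-fields).

```python
# Minimal sketch: stream the Parquet shards instead of downloading ~220GB up front.
from datasets import load_dataset

# "user/dataset-name" is a placeholder for this dataset's repository id.
books = load_dataset("user/dataset-name", split="train", streaming=True)

for book in books:
    print(book["title"])  # assumes a "title" field, see the Data fields section
    break
```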
## Metadata
The book texts are accompanied by basic metadata fields such as title, author and publication year, as well as IA and OL identifiers (see [Data Fields](#data-fields)). The metadata can be expanded with more information about subjects, authors, file details etc. by using the [OL API](https://openlibrary.org/developers/api), [OL Data Dumps](https://openlibrary.org/developers/dumps) and the [IA Metadata API](https://archive.org/developers/md-read.html).
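As a small example of such an expansion, the sketch below looks up one record's identifiers against the IA Metadata API and the OL API; the identifier values are placeholders.

```python
# Illustrative only: enrich a single record via the IA Metadata API and the OL API.
import requests

ia_id = "exampleitem00auth"   # placeholder IA identifier taken from a dataset record
ol_edition = "OL1234567M"     # placeholder OpenLibrary edition id from the same record

ia_meta = requests.get(f"https://archive.org/metadata/{ia_id}", timeout=30).json()
ol_meta = requests.get(f"https://openlibrary.org/books/{ol_edition}.json", timeout=30).json()

print(ia_meta.get("metadata", {}).get("subject"))  # IA-side subject metadata, if present
print(ol_meta.get("subjects", []))                 # OL-side subject headings, if present
```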
## Languages
Every book in this collection has been classified as having English as its primary language by the IA during the OCR process. A small number of books might also have other languages mixed in. In the future, more datasets will be compiled for other languages using the same methodology.
## OCR
The OCR for the books was produced by the IA. You can learn more about the details of the IA OCR process here: https://archive.org/developers/ocr.html. The OCR quality varies from book to book. Future versions of this dataset might include OCR quality scores or even texts corrected post-OCR using LLMs.
## Data fields
## License
The full texts of the works included in this dataset are presumed to be in the public domain and free of known copyrights by the institutions who have contributed them to the collections of the Internet Archive. The dataset itself is licensed under the [CC0 license](https://creativecommons.org/public-domain/cc0/).