parquet-converter committed on
Commit
121a430
1 Parent(s): 1df6ec7

Update parquet files

README.md DELETED
@@ -1,33 +0,0 @@
- ## Latin part of cc100 corpus
- This dataset contains parts of the Latin part of the [cc100](http://data.statmt.org/cc-100/) dataset. It was used to train a [RoBERTa-based LM model](https://huggingface.co/pstroe/roberta-base-latin-cased) with Hugging Face.
-
- ### Preprocessing
-
- I undertook the following preprocessing steps:
-
- - Removal of all "pseudo-Latin" text ("Lorem ipsum ...").
- - Use of [CLTK](http://www.cltk.org) for sentence splitting and normalisation.
- - Retaining only lines containing letters of the Latin alphabet, numerals, and certain punctuation (via `grep -P '^[A-z0-9ÄÖÜäöüÆ挜ᵫĀāūōŌ.,;:?!\- Ęę]+$' la.nolorem.tok.txt`).
- - Deduplication of the corpus.
-
- The result is a corpus of ~390 million tokens.
-
- ### Structure
- The dataset is structured as follows:
- ```
- {
-   "train": {
-     "text": "Solventibus autem illis pullum , dixerunt domini ejus ad illos : Quid solvitis pullum ?",
-     "text": "Errare humanum est ."
-     ...
-   }
-   "test": {
-     "text": "Alia iacta est ."
-     ...
-   }
- }
- ```
-
- ### Contact
-
- For contact, reach out to Phillip Ströbel [via mail](mailto:[email protected]) or [via Twitter](https://twitter.com/CLingophil).
la.nolorem.tok.latalphabetonly.v2.json → pstroe--cc100-latin/json-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0b671472377d6280e518c8fe5e63078a2c3fe34cfed3437ae1e16f6e13f74290
- size 1267711725
+ oid sha256:d35d5bf2a84fe4c7f8d4e7107375359560d018c738486169d9f7f929b3e3e5f5
+ size 881526558
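The line-filtering and deduplication steps described in the removed README can be sketched in Python. This is a minimal illustration, not the author's actual pipeline: `filter_and_dedupe` is a hypothetical helper name, and the character class uses `[A-Za-z]` where the quoted `grep` command wrote `[A-z]` (a range that also matches the ASCII characters between `Z` and `a`, which is assumed here to be unintended).

```python
import re

# Character filter mirroring the grep command quoted in the README.
# Assumption: [A-Za-z] stands in for the original [A-z] range.
LATIN_LINE = re.compile(r"^[A-Za-z0-9ÄÖÜäöüÆ挜ᵫĀāūōŌ.,;:?!\- Ęę]+$")

def filter_and_dedupe(lines):
    """Keep only lines matching the character filter, then deduplicate
    while preserving first-seen order (hypothetical helper)."""
    seen = set()
    out = []
    for line in lines:
        line = line.strip()
        if not LATIN_LINE.match(line):
            continue  # drop lines with characters outside the allowed set
        if line in seen:
            continue  # drop exact duplicates
        seen.add(line)
        out.append(line)
    return out
```

A duplicate line is kept once, and a line containing a disallowed character (e.g. a Greek letter) is dropped entirely.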