
# Latin part of cc100 corpus

This dataset contains parts of the Latin portion of the cc100 corpus. It was used to train a RoBERTa-based language model with Hugging Face.
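If you work with the `datasets` library, loading should look roughly like the sketch below. The repository id `pstroe/cc100-latin` is an assumption based on the hosting location and is not stated in this README.

```python
from datasets import load_dataset

# Repository id assumed to be "pstroe/cc100-latin".
dataset = load_dataset("pstroe/cc100-latin")

print(dataset)               # DatasetDict with "train" and "test" splits
print(dataset["train"][0])   # {"text": "..."}
```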

## Preprocessing

I undertook the following preprocessing steps:

- Removal of all "pseudo-Latin" text ("Lorem ipsum ...").
- Use of CLTK for sentence splitting and normalisation.
- Retention of only those lines that contain letters of the Latin alphabet, numerals, and certain punctuation: `grep -P '^[A-z0-9ÄÖÜäöüÆ挜ᵫĀāūōŌ.,;:?!\- Ęę]+$' la.nolorem.tok.txt`
- Deduplication of the corpus (a sketch of the filtering and deduplication steps follows this list).
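Below is a minimal sketch of the line filtering and deduplication, assuming CLTK sentence splitting and normalisation have already produced the tokenised file used by the grep command above. The output file name is illustrative and not part of the original pipeline.

```python
import re

# Same character class as the grep command above: keep only lines made up of
# Latin letters, digits, the listed special characters, and basic punctuation.
LINE_PATTERN = re.compile(r'^[A-z0-9ÄÖÜäöüÆ挜ᵫĀāūōŌ.,;:?!\- Ęę]+$')

def filter_and_deduplicate(in_path: str, out_path: str) -> None:
    seen = set()
    with open(in_path, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            line = line.rstrip("\n")
            # Keep a line only if it matches the filter and has not been seen before.
            if LINE_PATTERN.match(line) and line not in seen:
                seen.add(line)
                dst.write(line + "\n")

# Input name taken from the grep command; output name is illustrative.
filter_and_deduplicate("la.nolorem.tok.txt", "la.nolorem.tok.dedup.txt")
```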

The result is a corpus of ~390 million tokens.

## Structure

The dataset is structured as follows:

```
{
  "train": [
    {"text": "Solventibus autem illis pullum , dixerunt domini ejus ad illos : Quid solvitis pullum ?"},
    {"text": "Errare humanum est ."},
    ...
  ],
  "test": [
    {"text": "Alia iacta est ."},
    ...
  ]
}
```
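For the masked-language-model training mentioned above, the `text` field can be tokenised with a RoBERTa tokenizer. The sketch below is illustrative only: the repository id, checkpoint name, and maximum length are assumptions, not details from this README.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("pstroe/cc100-latin")               # repository id assumed
tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # illustrative checkpoint

def tokenize(batch):
    # Truncate to a fixed length; padding and masking would be handled later,
    # e.g. by a data collator during masked-language-model training.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
print(tokenized["train"][0].keys())  # input_ids, attention_mask
```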

## Contact

For questions, reach out to Phillip Ströbel via email or on Twitter.