Update README.md

README.md CHANGED

# ParaDocs

Data availability limits the scope of any given task.
In machine translation, models were historically incapable of handling longer contexts, so the lack of document-level datasets was less noticeable.
Now, despite the emergence of long-sequence methods, we remain in a sentence-level paradigm, without the data needed to adequately approach context-aware machine translation.
Most large-scale datasets have been processed through a pipeline that discards document-level metadata.

[ParaDocs](https://arxiv.org/abs/2406.03869) is a publicly available dataset that provides document-level metadata annotations for the parallel data of three large publicly available corpora (ParaCrawl, Europarl, and News Commentary) in many languages.
Using this data and the following scripts, you can download parallel document contexts for training context-aware machine translation systems.

If you have questions about this data or the use of these scripts, please do not hesitate to contact the maintainer at [email protected].

# Quick Start

The scripts to download and process the data can be found [here](https://github.com/rewicks/ParaDocs/).

Clone these scripts:

```
git clone https://github.com/rewicks/ParaDocs.git
```
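
The commands below are run from inside the cloned repository (by default, `git clone` creates a directory named `ParaDocs`):

```
cd ParaDocs
```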

From this directory, you can stream a specific language and split from Hugging Face with:

```
paradocs/paradocs-hf --name en-de-strict --minimum_size 2 --frequency_cutoff 100 --lid_cutoff 0.5
```
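
If you want to keep the streamed output for later use (for example, to build a training set), one option is to redirect it to a file. This is only a usage sketch: the output filename and the use of `gzip` are illustrative assumptions, not part of the ParaDocs scripts.

```
# Sketch: stream the en-de "strict" split with the same filters as above and
# save the result; the output path is an illustrative assumption.
paradocs/paradocs-hf --name en-de-strict --minimum_size 2 --frequency_cutoff 100 --lid_cutoff 0.5 \
    | gzip > en-de.strict.filtered.gz
```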

It may alternatively be faster to download the `*.gz` files of your desired split and then pipe them through the `paradocs/paradocs` script for filtering.

```
zcat data/en-de/strict* | paradocs/paradocs --minimum_size 2 --frequency_cutoff 100 --lid_cutoff 0.5
```
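
To filter several language pairs this way, a small shell loop is enough. The sketch below assumes each downloaded pair follows the `data/<pair>/` layout used in the example above; the pair list and output filenames are illustrative.

```
# Sketch: filter each downloaded pair with the same settings and write one
# compressed output file per pair (pairs shown here are only examples).
for pair in en-de en-fr; do
    zcat data/${pair}/strict* \
        | paradocs/paradocs --minimum_size 2 --frequency_cutoff 100 --lid_cutoff 0.5 \
        | gzip > ${pair}.strict.filtered.gz
done
```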

The filtering command-line arguments are explained in more detail in Section 3.2 of the paper.
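
The same arguments can usually be listed from the command line as well, assuming the script exposes a standard `--help` flag (an assumption; this README does not state it):

```
# Assumption: the filtering script prints its available options with --help.
paradocs/paradocs --help
```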

## The Paper

If you use this dataset in your research, please cite our [paper](https://arxiv.org/abs/2406.03869).

---