Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, pandas
hpprc committed · Commit 63921ff · verified · 1 parent: f24b5c9

Update README.md

Files changed (1): README.md (+1 -2)
@@ -35,7 +35,7 @@ configs:
     path: data/test-*
   - split: all
     path: data/all-*
-license: cc-by-4.0
+license: cc-by-sa-4.0
 task_categories:
 - text2text-generation
 language:
@@ -45,7 +45,6 @@ size_categories:
 - 100K<n<1M
 ---
 
-
 Preprocessed version of [WikiSplit](https://arxiv.org/abs/1808.09468).
 Since the [original WikiSplit dataset](https://huggingface.co/datasets/wiki_split) was tokenized and had some noises, we have used the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) for detokenization and removed text fragments.
 For detailed information on the preprocessing steps, please see [here](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/common.py).
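The one-line change corrects the license from CC BY 4.0 to CC BY-SA 4.0, the share-alike license that matches WikiSplit's origin in Wikipedia edit history.

The detokenization step the README describes can be approximated in Python. Below is a minimal sketch using the `sacremoses` port of the Moses scripts rather than the linked `detokenizer.perl`, so treat it as an illustration of the idea, not the repository's actual pipeline (that lives in the linked `wikisplit-pp` code):

```python
# Sketch only: approximate the Moses detokenization step with sacremoses,
# a Python port of the Moses scripts (the README links the original Perl
# detokenizer.perl; the real preprocessing is in the wikisplit-pp repo).
from sacremoses import MosesDetokenizer

detok = MosesDetokenizer(lang="en")

# A tokenized, WikiSplit-style sentence (illustrative, not taken from the data).
tokens = "The film , released in 1999 , was a hit .".split()

# Rejoins tokens into natural text: "The film, released in 1999, was a hit."
print(detok.detokenize(tokens))
```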
 
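Since the card lists the Datasets library and parquet files, loading should follow the standard `load_dataset` pattern. A minimal sketch, assuming the repo id is `hpprc/wikisplit` (inferred from the committer; this page does not state it):

```python
# Minimal sketch: load the preprocessed WikiSplit dataset with Hugging Face
# Datasets. The repo id "hpprc/wikisplit" is an assumption; substitute the
# actual dataset id if it differs.
from datasets import load_dataset

ds = load_dataset("hpprc/wikisplit")
print(ds)  # the YAML front matter configures at least "test" and "all" splits

# The "all" split defined in the front matter can be requested directly,
# and any split converts to a pandas DataFrame for inspection.
all_split = load_dataset("hpprc/wikisplit", split="all")
df = all_split.to_pandas()
print(df.head())
```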