---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple
    dtype: string
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 309170607
    num_examples: 795585
  - name: validation
    num_bytes: 38667164
    num_examples: 99448
  - name: test
    num_bytes: 38650132
    num_examples: 99448
  - name: all
    num_bytes: 386487903
    num_examples: 994481
  download_size: 540598777
  dataset_size: 772975806
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
  - split: all
    path: data/all-*
license: cc-by-sa-4.0
task_categories:
- text2text-generation
language:
- en
pretty_name: WikiSplit
size_categories:
- 100K<n<1M
---
A preprocessed version of [WikiSplit](https://arxiv.org/abs/1808.09468).
Since the [original WikiSplit dataset](https://huggingface.co/datasets/wiki_split) was tokenized and contained some noise, we detokenized it with the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) and removed text fragments.
For detailed information on the preprocessing steps, please see [here](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/common.py).
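As a rough illustration, the detokenization step might look like the sketch below. It uses the `sacremoses` Python port of the Moses detokenizer; the actual pipeline linked above may invoke the original Perl script instead, so treat this as an approximation rather than the exact preprocessing code.

```python
from sacremoses import MosesDetokenizer

# Python port of the Moses detokenizer (the original is a Perl script).
md = MosesDetokenizer(lang="en")

# Tokenized text as found in the original WikiSplit dataset.
tokenized = "It 's a tokenized sentence ."

# detokenize() expects a list of tokens and rejoins them with proper spacing.
detokenized = md.detokenize(tokenized.split())
print(detokenized)  # It's a tokenized sentence.
```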
This preprocessed dataset serves as the basis for [WikiSplit++](https://huggingface.co/datasets/cl-nagoya/wikisplit-pp).
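For reference, the splits declared in the front matter (`train`, `validation`, `test`, and the combined `all`) can be loaded with the `datasets` library. The Hub ID below is assumed from this repository's namespace; adjust it if the card is hosted elsewhere.

```python
from datasets import load_dataset

# Assumed Hub ID for this preprocessed WikiSplit card.
ds = load_dataset("cl-nagoya/wikisplit", split="train")

# Each row pairs a complex sentence with its split/simplified counterpart.
print(ds[0])  # {'id': ..., 'complex': '...', 'simple': '...', 'split': '...'}
```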
## Dataset Description
- **Repository:** https://github.com/nttcslab-nlp/wikisplit-pp
- **Paper:** https://arxiv.org/abs/2404.09002