---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple
    dtype: string
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 309170607
    num_examples: 795585
  - name: validation
    num_bytes: 38667164
    num_examples: 99448
  - name: test
    num_bytes: 38650132
    num_examples: 99448
  - name: all
    num_bytes: 386487903
    num_examples: 994481
  download_size: 540598777
  dataset_size: 772975806
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
  - split: all
    path: data/all-*
license: cc-by-sa-4.0
task_categories:
- text2text-generation
language:
- en
pretty_name: WikiSplit
size_categories:
- 100K<n<1M
---
|
|
|
Preprocessed version of [WikiSplit](https://arxiv.org/abs/1808.09468).
|
Since the [original WikiSplit dataset](https://huggingface.co/datasets/wiki_split) is tokenized and contains some noise, we applied the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) to detokenize the text and removed noisy text fragments.
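
As an illustration of the detokenization step, here is a minimal sketch using the `sacremoses` Python port of the Moses detokenizer (the preprocessing above links the original Perl script, so this is only an approximation, and the example sentence is purely illustrative):

```python
# Rough sketch: detokenizing a whitespace-tokenized sentence with the
# sacremoses port of the Moses detokenizer.
from sacremoses import MosesDetokenizer

detokenizer = MosesDetokenizer(lang="en")

tokenized = "The film was released in 1999 , and it won two awards ."
# WikiSplit text is whitespace-tokenized, so splitting on spaces recovers the tokens.
detokenized = detokenizer.detokenize(tokenized.split())
print(detokenized)  # "The film was released in 1999, and it won two awards."
```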
|
For detailed information on the preprocessing steps, please see [here](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/common.py).
|
This preprocessed dataset serves as the basis for [WikiSplit++](https://huggingface.co/datasets/cl-nagoya/wikisplit-pp).
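
A minimal loading sketch with 🤗 Datasets is shown below; the repository id (`cl-nagoya/wikisplit`) is an assumption, so substitute this dataset card's actual id if it differs:

```python
# Minimal usage sketch with the 🤗 Datasets library.
# NOTE: the repository id below is an assumption; replace it with this card's id.
from datasets import load_dataset

dataset = load_dataset("cl-nagoya/wikisplit", split="train")

example = dataset[0]
print(example["complex"])  # original (complex) sentence
print(example["simple"])   # detokenized split/simplified counterpart
print(example["split"])    # which portion of the data the example belongs to
```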
|
|
|
## Dataset Description |
|
|
|
- **Repository:** https://github.com/nttcslab-nlp/wikisplit-pp |
|
- **Paper:** https://arxiv.org/abs/2404.09002