---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 67523459
num_examples: 538896
- name: test
num_bytes: 1285789
num_examples: 9792
- name: validation
num_bytes: 1295645
num_examples: 9792
download_size: 20806553
dataset_size: 70104893
license: cc-by-sa-4.0
language:
- nl
tags:
- generic
- sentence similarity
pretty_name: Dutch translation of SNLI corpus with Marian NMT
size_categories:
- 100K<n<1M
task_categories:
- sentence-similarity
---
Information on the dataset:
```
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 67523459
num_examples: 538896
- name: test
num_bytes: 1285789
num_examples: 9792
- name: validation
num_bytes: 1295645
num_examples: 9792
download_size: 20806553
dataset_size: 70104893
```
# Dataset Card for "SNLI_Dutch_translated_with_Marianmt"
Translation of the **English** [Stanford Natural Language Inference (SNLI)](https://nlp.stanford.edu/projects/snli/) corpus
into **Dutch**, using a [Marian NMT model](https://marian-nmt.github.io/) trained by [Helsinki NLP](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
Note, for reference: the Marian NMT model architecture is based on [BART](https://huggingface.co/docs/transformers/model_doc/bart), described [here](https://arxiv.org/abs/1910.13461).
A complete description of the original dataset is given [here](https://huggingface.co/datasets/snli).
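The integer `label` column follows the upstream SNLI encoding (0 = entailment, 1 = neutral, 2 = contradiction; -1 marks examples without a gold label). The helper below is illustrative, not part of the dataset:

```python
# SNLI label encoding, per the upstream SNLI dataset card.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

def decode_label(label_id: int) -> str:
    """Map an SNLI integer label to its class name; -1 has no gold label."""
    return LABELS.get(label_id, "no_gold_label")

print(decode_label(0))   # entailment
print(decode_label(-1))  # no_gold_label
```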
# Attribution
If you use this dataset, please cite the creators of SNLI:
```bibtex
@inproceedings{snli:emnlp2015,
Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher and Manning, Christopher D.},
Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
Publisher = {Association for Computational Linguistics},
Title = {A large annotated corpus for learning natural language inference},
Year = {2015}
}
```
the creators of the OPUS-MT models:
```
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} -- {B}uilding open translation services for the {W}orld},
booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
and this dataset itself:
```
@misc {van_es_2023,
author = { {Bram van Es} },
title = { SNLI_Dutch_translated_with_Marianmt (Revision 9ad7971) },
year = 2023,
url = { https://huggingface.co/datasets/UMCU/SNLI_Dutch_translated_with_Marianmt },
doi = { 10.57967/hf/1268 },
publisher = { Hugging Face }
}
```
# License
For both the Marian NMT framework and the original [Helsinki NLP](https://twitter.com/HelsinkiNLP) [OPUS-MT model](https://huggingface.co/Helsinki-NLP)
we did **not** find a license. If this is in error, please let us know and we will add the appropriate licensing promptly.
We adopt the license of the SNLI corpus: the [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).