gsarti committed · verified · commit ff4e69e · Parent(s): f1a019c

Update README.md

Files changed (1): README.md (+449 −437)
---
language:
- en
- it
- nl
license:
- apache-2.0
tags:
- machine-translation
- quality-estimation
- post-editing
- translation
- behavioral-data
- multidimensional-quality-metric
- mqm
- comet
- qe
language_creators:
- machine-generated
- expert-generated
annotations_creators:
- machine-generated
pretty_name: qe4pe
size_categories:
- 10K<n<100K
source_datasets:
- Unbabel/TowerEval-Data-v0.1
task_categories:
- translation
configs:
- config_name: main
  data_files:
  - split: train
    path: task/main/processed_main.csv
- config_name: pretask
  data_files:
  - split: train
    path: task/pretask/processed_pretask.csv
- config_name: posttask
  data_files:
  - split: train
    path: task/posttask/processed_posttask.csv
- config_name: pretask_questionnaire
  data_files:
  - split: train
    path: questionnaires/pretask_results.csv
- config_name: posttask_highlight_questionnaire
  data_files:
  - split: train
    path: questionnaires/posttask_highlight_results.csv
- config_name: posttask_no_highlight_questionnaire
  data_files:
  - split: train
    path: questionnaires/posttask_no_highlight_results.csv
---

# Quality Estimation for Post-Editing (QE4PE)

*For more details on QE4PE, see our [paper](https://arxiv.org/abs/2503.03044) and our [GitHub repository](https://github.com/gsarti/qe4pe).*

## Dataset Description
- **Source:** [GitHub](https://github.com/gsarti/qe4pe)
- **Paper:** [Arxiv](https://arxiv.org/abs/2503.03044)
- **Point of Contact:** [Gabriele Sarti](mailto:[email protected])

[Gabriele Sarti](https://gsarti.com) • [Vilém Zouhar](https://vilda.net/) • [Grzegorz Chrupała](https://grzegorz.chrupala.me/) • [Ana Guerberof Arenas](https://scholar.google.com/citations?user=i6bqaTsAAAAJ) • [Malvina Nissim](https://malvinanissim.github.io/) • [Arianna Bisazza](https://www.cs.rug.nl/~bisazza/)

<p float="left">
  <img src="https://github.com/gsarti/qe4pe/blob/main/figures/highlevel_qe4pe.png?raw=true" alt="QE4PE annotation pipeline" width=400/>
</p>

> Word-level quality estimation (QE) detects erroneous spans in machine translations, which can direct and facilitate human post-editing. While the accuracy of word-level QE systems has been assessed extensively, their usability and downstream influence on the speed, quality and editing choices of human post-editing remain understudied. Our QE4PE study investigates the impact of word-level QE on machine translation (MT) post-editing in a realistic setting involving 42 professional post-editors across two translation directions. We compare four error-span highlight modalities, including supervised and uncertainty-based word-level QE methods, for identifying potential errors in the outputs of a state-of-the-art neural MT model. Post-editing effort and productivity are estimated by behavioral logs, while quality improvements are assessed by word- and segment-level human annotation. We find that domain, language and editors' speed are critical factors in determining highlights' effectiveness, with modest differences between human-made and automated QE highlights underlining a gap between accuracy and usability in professional workflows.

### Dataset Summary

This dataset provides convenient access to the processed `pretask`, `main` and `posttask` splits and the questionnaires for the QE4PE study. A sample of challenging documents extracted from WMT23 evaluation data was machine translated from English to Italian and Dutch using [NLLB 3.3B](https://huggingface.co/facebook/nllb-200-3.3B), and post-edited by 12 translators per direction across 4 highlighting modalities employing various word-level quality estimation (QE) strategies to present translators with potential errors during editing. Additional details are provided in the [main task readme](./task/main/README.md) and in our paper. During post-editing, behavioral data (keystrokes, pauses and editing times) were collected using the [GroTE](https://github.com/gsarti/grote) online platform. For the main task, a subset of the data was annotated with Multidimensional Quality Metrics (MQM) by professional annotators.

We publicly release the granular editing logs alongside the processed dataset to foster new research on the usability of word-level QE strategies in modern post-editing workflows.
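
The processed configurations can be loaded directly with 🤗 Datasets. A minimal sketch, assuming the dataset is hosted under the `gsarti/qe4pe` identifier on the Hub:

```python
from datasets import load_dataset

# Load the processed main task data (`main` configuration, CSV-backed).
main = load_dataset("gsarti/qe4pe", "main", split="train")

# Each row is one machine-translated and post-edited segment.
example = main[0]
print(example["unit_id"], example["highlight_modality"])
print(example["mt_text"])
print(example["pe_text"])
```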

### News 📢

**March 2025**: The QE4PE paper is available on [Arxiv](https://arxiv.org/abs/2503.03044).

**January 2025**: MQM annotations are now available for the `main` task.

**October 2024**: The QE4PE dataset is released on the HuggingFace Hub! 🎉

### Repository Structure

The repository is organized as follows:

```shell
qe4pe/
├── questionnaires/                       # Configs and results for pre- and post-task questionnaires for translators
│   ├── pretask_results.csv               # Results of the pretask questionnaire, corresponding to the `pretask_questionnaire` configuration
│   ├── posttask_highlight_results.csv    # Results of the posttask questionnaire for highlighted modalities, corresponding to the `posttask_highlight_questionnaire` configuration
│   ├── posttask_no_highlight_results.csv # Results of the posttask questionnaire for the `no_highlight` modality, corresponding to the `posttask_no_highlight_questionnaire` configuration
│   └── ...                               # Configurations reporting the exact questionnaire questions and options
├── setup/
│   ├── highlights/                       # Outputs of word-level QE strategies used to set up highlighted spans in the tasks
│   ├── qa/                               # MQM/ESA annotations for the main task
│   ├── processed/                        # Intermediate outputs of the selection process for the main task
│   └── wmt23/                            # Original collection of WMT23 sources and machine-translated outputs
└── task/
    ├── example/                          # Example folder with task structure
    ├── main/                             # Main task data, logs, outputs and guidelines
    │   ├── ...
    │   ├── processed_main.csv            # Processed main task data, corresponding to the `main` configuration
    │   └── README.md                     # Details about the main task
    ├── posttask/                         # Post-task data, logs, outputs and guidelines
    │   ├── ...
    │   ├── processed_posttask.csv        # Processed post-task data, corresponding to the `posttask` configuration
    │   └── README.md                     # Details about the post-task
    └── pretask/                          # Pretask data, logs, outputs and guidelines
        ├── ...
        ├── processed_pretask.csv         # Processed pretask data, corresponding to the `pretask` configuration
        └── README.md                     # Details about the pretask
```

### Languages

The language data of QE4PE is in English (BCP-47 `en`), Italian (BCP-47 `it`) and Dutch (BCP-47 `nl`).

## Dataset Structure

### Data Instances

The dataset contains three configurations corresponding to the three tasks: `pretask`, `main` and `posttask`. `main` contains the full data collected during the main task and analyzed during our experiments. `pretask` contains the data collected in the initial verification phase before the main task, in which all translators worked on texts highlighted in the `supervised` modality. `posttask` contains the data collected in the final phase, in which all translators worked on texts in the `no_highlight` modality.

### Data Fields

A single entry in the dataframe represents a segment (~sentence) in the dataset that was machine-translated and post-edited by a professional translator. The following fields are contained in the training set:

|Field |Description |
|------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
| **Identification** | |
|`unit_id` | The full entry identifier. Format: `qe4pe-{task_id}-{src_lang}-{tgt_lang}-{doc_id}-{segment_in_doc_id}-{translator_main_task_id}`. |
|`wmt_id` | Identifier of the sentence in the original [WMT23](./data/setup/wmt23/wmttest2023.eng.jsonl) dataset. |
|`wmt_category` | Category of the document: `biomedical` or `social`. |
|`doc_id` | The index of the document in the current configuration of the QE4PE dataset containing the current segment. |
|`segment_in_doc_id` | The index of the segment inside the current document. |
|`segment_id` | The index of the segment in the current configuration (i.e. concatenating all segments from all documents in order). |
|`translator_pretask_id` | The identifier for the translator according to the `pretask` format before modality assignments: `tXX`. |
|`translator_main_id` | The identifier for the translator according to the `main` task format after modality assignments: `{highlight_modality}_tXX`. |
|`src_lang` | The source language of the segment. For QE4PE, this is always English (`eng`). |
|`tgt_lang` | The target language of the segment: either Italian (`ita`) or Dutch (`nld`). |
|`highlight_modality` | The highlighting modality used for the segment. Values: `no_highlight`, `oracle`, `supervised`, `unsupervised`. |
| **Text statistics** | |
|`src_num_chars` | Length of the source segment in number of characters. |
|`mt_num_chars` | Length of the machine-translated segment in number of characters. |
|`pe_num_chars` | Length of the post-edited segment in number of characters. |
|`src_num_words` | Length of the source segment in number of words. |
|`mt_num_words` | Length of the machine-translated segment in number of words. |
|`pe_num_words` | Length of the post-edited segment in number of words. |
|`num_minor_highlighted_chars` | Number of characters highlighted as minor errors in the machine-translated text. |
|`num_major_highlighted_chars` | Number of characters highlighted as major errors in the machine-translated text. |
|`num_minor_highlighted_words` | Number of words highlighted as minor errors in the machine-translated text. |
|`num_major_highlighted_words` | Number of words highlighted as major errors in the machine-translated text. |
| **Edits statistics** | |
|`num_words_insert` | Number of word-level post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_words_delete` | Number of word-level post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_words_substitute` | Number of word-level post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_words_unchanged` | Number of word-level post-editing hits (unchanged words) computed using [jiwer](https://github.com/jitsi/jiwer). |
|`tot_words_edits` | Total of all word-level edit types for the sentence. |
|`wer` | Word Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_insert` | Number of character-level post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_delete` | Number of character-level post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_substitute` | Number of character-level post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
|`num_chars_unchanged` | Number of character-level post-editing hits (unchanged characters) computed using [jiwer](https://github.com/jitsi/jiwer). |
|`tot_chars_edits` | Total of all character-level edit types for the sentence. |
|`cer` | Character Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
| **Translation quality**| |
|`mt_bleu_max` | Max BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_bleu_min` | Min BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_bleu_mean` | Mean BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_bleu_std` | Standard deviation of BLEU scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_max` | Max chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_min` | Min chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_mean` | Mean chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_chrf_std` | Standard deviation of chrF scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_max` | Max TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_min` | Min TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_mean` | Mean TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_ter_std` | Standard deviation of TER scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`mt_comet_max` | Max COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_comet_min` | Min COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_comet_mean` | Mean COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_comet_std` | Standard deviation of COMET sentence-level scores for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`mt_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the `mt_text`. |
|`mt_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the `mt_text`. |
|`pe_bleu_max` | Max BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_bleu_min` | Min BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_bleu_mean` | Mean BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_bleu_std` | Standard deviation of BLEU scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_max` | Max chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_min` | Min chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_mean` | Mean chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_chrf_std` | Standard deviation of chrF scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_max` | Max TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_min` | Min TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_mean` | Mean TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_ter_std` | Standard deviation of TER scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
|`pe_comet_max` | Max COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_comet_min` | Min COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_comet_mean` | Mean COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_comet_std` | Standard deviation of COMET sentence-level scores for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
|`pe_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the `pe_text`. |
|`pe_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the `pe_text`. |
| **Behavioral data** | |
|`doc_num_edits` | Total number of edits performed by the translator on the current document. Only the last edit outputs are considered valid. |
|`doc_edit_order` | Index corresponding to the current document edit order. If equal to `doc_id`, the document was edited in the given order. |
|`doc_edit_time` | Total editing time for the current document in seconds (from `start` to `end`, no times ignored). |
|`doc_edit_time_filtered`| Total editing time for the current document in seconds (from `start` to `end`, >5m pauses between logged actions ignored). |
|`doc_keys_per_min` | Keystrokes per minute computed for the current document using `doc_edit_time_filtered`. |
|`doc_chars_per_min` | Characters per minute computed for the current document using `doc_edit_time_filtered`. |
|`doc_words_per_min` | Words per minute computed for the current document using `doc_edit_time_filtered`. |
|`segment_num_edits` | Total number of edits performed by the translator on the current segment. Only edits for the last edit of the doc are considered valid. |
|`segment_edit_order` | Index corresponding to the current segment edit order (only the first `enter` action counts). If equal to `segment_in_doc_id`, the segment was edited in the given order. |
|`segment_edit_time` | Total editing time for the current segment in seconds (summed time between `enter`-`exit` blocks). |
|`segment_edit_time_filtered` | Total editing time for the current segment in seconds (>5m pauses between logged actions ignored). |
|`segment_keys_per_min` | Keystrokes per minute computed for the current segment using `segment_edit_time_filtered`. |
|`segment_chars_per_min` | Characters per minute computed for the current segment using `segment_edit_time_filtered`. |
|`segment_words_per_min` | Words per minute computed for the current segment using `segment_edit_time_filtered`. |
|`num_enter_actions` | Number of `enter` actions (focus on textbox) performed by the translator on the current segment during post-editing. |
|`remove_highlights` | If True, the Clear Highlights button was pressed for this segment (always False for the `no_highlight` modality). |
|**Texts and annotations**| |
|`src_text` | The original source segment from WMT23 requiring translation. |
|`mt_text` | Output of the `NLLB-3.3B` model when translating `src_text` into `tgt_lang` (default config, 5 beams). |
|`mt_text_highlighted` | Highlighted version of `mt_text` with potential errors according to the `highlight_modality`. |
|`pe_text` | Post-edited version of `mt_text` produced by a professional translator with `highlight_modality`. |
|`mt_pe_word_aligned` | Aligned visual representation of word-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\\n` with `\n` to show the three aligned rows). |
|`mt_pe_char_aligned` | Aligned visual representation of character-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\\n` with `\n` to show the three aligned rows). |
|`highlights` | List of dictionaries for highlighted spans with error severity and position, matching the XCOMET format for word-level error annotations. |
|**MQM annotations (`main` config only)**| |
|`qa_mt_annotator_id` | Annotator ID for the MQM evaluation of `qa_mt_annotated_text`. |
|`qa_pe_annotator_id` | Annotator ID for the MQM evaluation of `qa_pe_annotated_text`. |
|`qa_mt_esa_rating` | 0-100 quality rating for the `qa_mt_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
|`qa_pe_esa_rating` | 0-100 quality rating for the `qa_pe_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
|`qa_mt_annotated_text` | Version of `mt_text` annotated with MQM errors. Might differ (only slightly) from `mt_text`; included since `qa_mt_mqm_errors` indices are computed on this string. |
|`qa_pe_annotated_text` | Version of `pe_text` annotated with MQM errors. Might differ (only slightly) from `pe_text`; included since `qa_pe_mqm_errors` indices are computed on this string. |
|`qa_mt_fixed_text` | Proposed correction of `qa_mt_annotated_text` following MQM annotation. |
|`qa_pe_fixed_text` | Proposed correction of `qa_pe_annotated_text` following MQM annotation. |
|`qa_mt_mqm_errors` | List of error spans detected by the MQM annotator for the `qa_mt_annotated_text`. Each error span dictionary contains the following fields. `text`: the span in `qa_mt_annotated_text` containing an error. `text_start`: the start index of the error span in `qa_mt_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `text_end`: the end index of the error span in `qa_mt_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `correction`: the proposed correction in `qa_mt_fixed_text` for the error span in `qa_mt_annotated_text`. `correction_start`: the start index of the corrected span in `qa_mt_fixed_text` (-1 if no corrected span is present, e.g. for additions). `correction_end`: the end index of the corrected span in `qa_mt_fixed_text` (-1 if no corrected span is present, e.g. for additions). `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span, one of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span, one of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
|`qa_pe_mqm_errors` | Same as `qa_mt_mqm_errors`, but for error spans detected by the MQM annotator for the `qa_pe_annotated_text`, with span and correction indices referring to `qa_pe_annotated_text` and `qa_pe_fixed_text` respectively. |
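
The word- and character-level edit statistics above can be recomputed from the released texts. A minimal sketch using [jiwer](https://github.com/jitsi/jiwer), shown on the example segment reported further below (the official counts may use slightly different preprocessing; see the QE4PE repository):

```python
import jiwer

mt = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs."
pe = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding."

# Word-level edit operations and WER (MT output as reference, post-edit as hypothesis).
words = jiwer.process_words(mt, pe)
print(words.insertions, words.deletions, words.substitutions, words.hits)  # 0 0 1 15
print(round(words.wer, 4))                                                 # 0.0625

# Character-level edit operations and CER.
chars = jiwer.process_characters(mt, pe)
print(chars.insertions, chars.deletions, chars.substitutions, chars.hits)  # 0 0 6 100
print(round(chars.cer, 4))                                                 # 0.0566
```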

### Data Splits

|`config` | `split`| # Entries (detail) |
|------------------------------------:|-------:|--------------------------------------------------------------:|
|`main` | `train`| 8100 (51 docs, i.e. 324 sents x 25 translators) |
|`pretask` | `train`| 950 (6 docs, i.e. 38 sents x 25 translators) |
|`posttask` | `train`| 1200 (8 docs, i.e. 50 sents x 24 translators) |
|`pretask_questionnaire` | `train`| 26 (all translators, including replaced/replacements) |
|`posttask_highlight_questionnaire` | `train`| 19 (all translators for highlight modalities + 1 replacement) |
|`posttask_no_highlight_questionnaire`| `train`| 6 (all translators for `no_highlight` modality) |
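
As a sanity check, the reported sizes can be reproduced from the loaded configurations. A small sketch reusing the dataset identifier assumed above:

```python
from datasets import load_dataset

# Load the main configuration as a pandas dataframe.
df = load_dataset("gsarti/qe4pe", "main", split="train").to_pandas()
print(len(df))  # expected: 8100

# Distinct translators per target language and highlighting modality.
print(df.groupby(["tgt_lang", "highlight_modality"])["translator_main_id"].nunique())
```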

#### Train Split

The `train` split contains all triplets (or pairs, when translation from scratch is performed) annotated with behavioral data produced during the translation.

The following is an example of subject `oracle_t1` post-editing segment `3` of `doc20` in the `eng-nld` direction of the `main` task. The fields `mt_pe_word_aligned` and `mt_pe_char_aligned` are shown over three lines to provide a visual understanding of their contents.

```python
{
    # Identification
    "unit_id": "qe4pe-main-eng-nld-20-3-oracle_t1",
    "wmt_id": "doc5",
    "wmt_category": "biomedical",
    "doc_id": 20,
    "segment_in_doc_id": 3,
    "segment_id": 129,
    "translator_pretask_id": "t4",
    "translator_main_id": "oracle_t1",
    "src_lang": "eng",
    "tgt_lang": "nld",
    "highlight_modality": "oracle",
    # Text statistics
    "src_num_chars": 104,
    "mt_num_chars": 136,
    "pe_num_chars": 106,
    "src_num_words": 15,
    "mt_num_words": 16,
    "pe_num_words": 16,
    # Edits statistics
    "num_words_insert": 0,
    "num_words_delete": 0,
    "num_words_substitute": 1,
    "num_words_unchanged": 15,
    "tot_words_edits": 1,
    "wer": 0.0625,
    "num_chars_insert": 0,
    "num_chars_delete": 0,
    "num_chars_substitute": 6,
    "num_chars_unchanged": 100,
    "tot_chars_edits": 6,
    "cer": 0.0566,
    # Translation quality
    "mt_bleu_max": 100.0,
    "mt_bleu_min": 7.159,
    "mt_bleu_mean": 68.687,
    "mt_bleu_std": 31.287,
    "mt_chrf_max": 100.0,
    "mt_chrf_min": 45.374,
    "mt_chrf_mean": 83.683,
    "mt_chrf_std": 16.754,
    "mt_ter_max": 100.0,
    "mt_ter_min": 0.0,
    "mt_ter_mean": 23.912,
    "mt_ter_std": 29.274,
    "mt_comet_max": 0.977,
    "mt_comet_min": 0.837,
    "mt_comet_mean": 0.94,
    "mt_comet_std": 0.042,
    "mt_xcomet_qe": 0.985,
    "mt_xcomet_errors": "[]",
    "pe_bleu_max": 100.0,
    "pe_bleu_min": 11.644,
    "pe_bleu_mean": 61.335,
    "pe_bleu_std": 28.617,
    "pe_chrf_max": 100.0,
    "pe_chrf_min": 53.0,
    "pe_chrf_mean": 79.173,
    "pe_chrf_std": 13.679,
    "pe_ter_max": 100.0,
    "pe_ter_min": 0.0,
    "pe_ter_mean": 28.814,
    "pe_ter_std": 28.827,
    "pe_comet_max": 0.977,
    "pe_comet_min": 0.851,
    "pe_comet_mean": 0.937,
    "pe_comet_std": 0.035,
    "pe_xcomet_qe": 0.984,
    "pe_xcomet_errors": "[]",
    # Behavioral data
    "doc_num_edits": 103,
    "doc_edit_order": 20,
    "doc_edit_time": 118,
    "doc_edit_time_filtered": 118,
    "doc_keys_per_min": 52.37,
    "doc_chars_per_min": 584.24,
    "doc_words_per_min": 79.83,
    "segment_num_edits": 9,
    "segment_edit_order": 3,
    "segment_edit_time": 9,
    "segment_edit_time_filtered": 9,
    "segment_keys_per_min": 60.0,
    "segment_chars_per_min": 906.67,
    "segment_words_per_min": 106.67,
    "num_enter_actions": 2,
    "remove_highlights": False,
    # Texts and annotations
    "src_text": "The speed of its emerging growth frequently outpaces the development of quality assurance and education.",
    "mt_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
    "mt_text_highlighted": "De snelheid van de opkomende groei is vaak <minor>sneller</minor> dan de ontwikkeling van kwaliteitsborging en <major>onderwijs.</major>",
    "pe_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
    "mt_pe_word_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
                          "PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
                          "                                                                                                      S",
    "mt_pe_char_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
                          "PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
                          "                                                                                                     SS SS SS ",
    "highlights": """[
        {
            'text': 'sneller',
            'severity': 'minor',
            'start': 43,
            'end': 50
        },
        {
            'text': 'onderwijs.',
            'severity': 'major',
            'start': 96,
            'end': 106
        }
    ]""",
    # QA annotations
    "qa_mt_annotator_id": 'qa_nld_3',
    "qa_pe_annotator_id": 'qa_nld_1',
    "qa_mt_esa_rating": 100.0,
    "qa_pe_esa_rating": 80.0,
    "qa_mt_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
    "qa_pe_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
    "qa_mt_fixed_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
    "qa_pe_fixed_text": "De snelheid van de ontluikende groei overtreft vaak de ontwikkeling van kwaliteitsborging en onderwijs.",
    "qa_mt_mqm_errors": "[]",
    "qa_pe_mqm_errors": """[
        {
            "text": "opkomende",
            "text_start": 19,
            "text_end": 28,
            "correction": "ontluikende",
            "correction_start": 19,
            "correction_end": 30,
            "description": "Mistranslation - not the correct word",
            "mqm_category": "Mistranslation",
            "severity": "Minor",
            "comment": "",
            "edit_order": 1
        }
    ]"""
}
```

The text is provided as-is, without further preprocessing or tokenization.
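
Note that list-valued fields such as `highlights`, `mt_xcomet_errors` and `qa_pe_mqm_errors` are serialized as strings. A minimal sketch for decoding them and for unescaping the aligned views (`example` is a row loaded as in the earlier snippet; `ast.literal_eval` accepts both the single- and double-quoted variants shown above):

```python
import ast

def parse_spans(field: str) -> list:
    # Decode a stringified span list, e.g. `highlights` or `qa_pe_mqm_errors`.
    return ast.literal_eval(field) if field else []

for span in parse_spans(example["highlights"]):
    print(span["severity"], span["text"], span["start"], span["end"])

# The aligned views store escaped newlines; unescape them to display the three rows.
print(example["mt_pe_word_aligned"].replace("\\n", "\n"))
```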

### Dataset Creation

The datasets were parsed from GroTE inputs, logs and outputs for the QE4PE study, available in this repository. Processed dataframes were created using the `qe4pe process_task_data` command. Refer to the [QE4PE GitHub repository](https://github.com/gsarti/qe4pe) for additional details. The overall structure and processing of the dataset were inspired by the [DivEMT dataset](https://huggingface.co/datasets/GroNLP/divemt).

### MQM Annotations

MQM annotations were collected using Google Sheets, and highlights were parsed from the HTML-exported output, ensuring their compliance with well-formedness checks. Out of the original 51 docs (324 segments) in `main`, 24 docs (10 biomedical, 14 social, totaling 148 segments) were sampled at random and annotated by professional translators.

## Additional Information

### Metric Signatures

The following signatures correspond to the metrics reported in the processed dataframes:

```shell
# Computed using SacreBLEU: https://github.com/mjpost/sacrebleu
BLEU: case:mixed|eff:yes|tok:13a|smooth:exp|version:2.3.1
ChrF: case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.3.1
TER: case:lc|tok:tercom|norm:no|punct:yes|asian:no|version:2.3.1

# Computed using Unbabel COMET: https://github.com/Unbabel/COMET
Comet: Python3.11.9|Comet2.2.2|fp32|Unbabel/wmt22-comet-da
XComet: Python3.10.12|Comet2.2.1|fp32|Unbabel/XCOMET-XXL
```
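
The sketch below shows how sentence-level scores matching these signatures can be obtained with SacreBLEU defaults (effective order enabled for sentence-level BLEU); the COMET calls are left as comments since they require downloading the listed checkpoints:

```python
from sacrebleu.metrics import BLEU, CHRF, TER

mt = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs."
pe = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding."

bleu, chrf, ter = BLEU(effective_order=True), CHRF(), TER()
print(bleu.sentence_score(mt, [pe]).score)   # BLEU between MT output and one post-edit
print(chrf.sentence_score(mt, [pe]).score)   # chrF
print(ter.sentence_score(mt, [pe]).score)    # TER
print(bleu.get_signature())                  # reproduces the reported BLEU signature

# COMET (requires `pip install unbabel-comet` and access to the checkpoints):
# from comet import download_model, load_from_checkpoint
# model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
# scores = model.predict([{"src": "...", "mt": mt, "ref": pe}], batch_size=8)
```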

### Dataset Curators

For problems related to this 🤗 Datasets version, please contact me at [[email protected]](mailto:[email protected]).

### Citation Information

```bibtex
TODO
```
+ ---
2
+ language:
3
+ - en
4
+ - it
5
+ - nl
6
+ license:
7
+ - apache-2.0
8
+ tags:
9
+ - machine-translation
10
+ - quality-estimation
11
+ - post-editing
12
+ - translation
13
+ - behavioral-data
14
+ - multidimensional-quality-metric
15
+ - mqm
16
+ - comet
17
+ - qe
18
+ language_creators:
19
+ - machine-generated
20
+ - expert-generated
21
+ annotations_creators:
22
+ - machine-generated
23
+ pretty_name: qe4pe
24
+ size_categories:
25
+ - 10K<n<100K
26
+ source_datasets:
27
+ - Unbabel/TowerEval-Data-v0.1
28
+ task_categories:
29
+ - translation
30
+ configs:
31
+ - config_name: main
32
+ data_files:
33
+ - split: train
34
+ path: task/main/processed_main.csv
35
+ - config_name: pretask
36
+ data_files:
37
+ - split: train
38
+ path: task/pretask/processed_pretask.csv
39
+ - config_name: posttask
40
+ data_files:
41
+ - split: train
42
+ path: task/posttask/processed_posttask.csv
43
+ - config_name: pretask_questionnaire
44
+ data_files:
45
+ - split: train
46
+ path: questionnaires/pretask_results.csv
47
+ - config_name: posttask_highlight_questionnaire
48
+ data_files:
49
+ - split: train
50
+ path: questionnaires/posttask_highlight_results.csv
51
+ - config_name: posttask_no_highlight_questionnaire
52
+ data_files:
53
+ - split: train
54
+ path: questionnaires/posttask_no_highlight_results.csv
55
+ ---
56
+
57
+ # Quality Estimation for Post-Editing (QE4PE)
58
+
59
+ *For more details on QE4PE, see our [paper](TBD) and our [Github repository](https://github.com/gsarti/qe4pe)*
60
+
61
+ ## Dataset Description
62
+ - **Source:** [Github](https://github.com/gsarti/qe4pe)
63
+ - **Paper:** [Arxiv](https://arxiv.org/abs/2503.03044)
64
+ - **Point of Contact:** [Gabriele Sarti](mailto:[email protected])
65
+
66
+ [Gabriele Sarti](https://gsarti.com) • [Vilém Zouhar](https://vilda.net/) • [Grzegorz Chrupała](https://grzegorz.chrupala.me/) • [Ana Guerberof Arenas](https://scholar.google.com/citations?user=i6bqaTsAAAAJ) • [Malvina Nissim](https://malvinanissim.github.io/) • [Arianna Bisazza](https://www.cs.rug.nl/~bisazza/)
67
+
68
+
69
+ <p float="left">
70
+ <img src="https://github.com/gsarti/qe4pe/blob/main/figures/highlevel_qe4pe.png?raw=true" alt="QE4PE annotation pipeline" width=400/>
71
+ </p>
72
+
73
+ >Word-level quality estimation (QE) detects erroneous spans in machine translations, which can direct and facilitate human post-editing. While the accuracy of word-level QE systems has been assessed extensively, their usability and downstream influence on the speed, quality and editing choices of human post-editing remain understudied. Our QE4PE study investigates the impact of word-level QE on machine translation (MT) post-editing in a realistic setting involving 42 professional post-editors across two translation directions. We compare four error-span highlight modalities, including supervised and uncertainty-based word-level QE methods, for identifying potential errors in the outputs of a state-of-the-art neural MT model. Post-editing effort and productivity are estimated by behavioral logs, while quality improvements are assessed by word- and segment-level human annotation. We find that domain, language and editors' speed are critical factors in determining highlights' effectiveness, with modest differences between human-made and automated QE highlights underlining a gap between accuracy and usability in professional workflows.
74
+
75
+ ### Dataset Summary
76
+
77
+ This dataset provides a convenient access to the processed `pretask`, `main` and `posttask` splits and the questionnaires for the QE4PE study. A sample of challenging documents extracted from WMT23 evaluation data were machine translated from English to Italian and Dutch using [NLLB 3.3B](https://huggingface.co/facebook/nllb-200-3.3B), and post-edited by 12 translators per direction across 4 highlighting modalities employing various word-level quality estimation (QE) strategies to present translators with potential errors during the editing. Additional details are provided in the [main task readme](./task/main/README.md) and in our paper. During the post-editing, behavioral data (keystrokes, pauses and editing times) were collected using the [GroTE](https://github.com/gsarti/grote) online platform. For the main task, a subset of the data was annotated with Multidimensional Quality Metrics (MQM) by professional annotators.
78
+
79
+ We publicly release the granular editing logs alongside the processed dataset to foster new research on the usability of word-level QE strategies in modern post-editing workflows.
80
+
81
+ ### News 📢
82
+
83
+ **March 2025**: The QE4PE paper is available on [Arxiv](https://arxiv.org/abs/2503.03044).
84
+
85
+ **January 2025**: MQM annotations are now available for the `main` task.
86
+
87
+ **October 2024**: The QE4PE dataset is released on the HuggingFace Hub! 🎉
88
+
89
+ ### Repository Structure
90
+
91
+ The repository is organized as follows:
92
+
93
+ ```shell
94
+ qe4pe/
95
+ ├── questionnaires/ # Configs and results for pre- and post-task questionnaires for translators
96
+ ├── pretask_results.csv # Results of the pretask questionnaire, corresponding to the `pretask_questionnaire` configuration
97
+ │ ├── posttask_highlight_results.csv # Results of the posttask questionnaire for highlighted modalities, corresponding to the `posttask_highlight_questionnaire` configuration
98
+ │ ├── posttask_no_highlight_results.csv # Results of the posttask questionnaire for the `no_highlight` modality, corresponding to the `posttask_no_highlight_questionnaire` configuration
99
+ └── ... # Configurations reporting the exact questionnaires questions and options.
100
+ ├── setup/
101
+ │ ├── highlights/ # Outputs of word-level QE strategies used to setup highlighted spans in the tasks
102
+ ├── qa/ # MQM/ESA annotations for the main task
103
+ ├── processed/ # Intermediate outputs of the selection process for the main task
104
+ └── wmt23/ # Original collection of WMT23 sources and machine-translated outputs
105
+ └── task/
106
+ ├── example/ # Example folder with task structure
107
+ ├── main/ # Main task data, logs, outputs and guidelines
108
+ │ ├── ...
109
+ │ ├── processed_main.csv # Processed main task data, corresponds to the `main` configuration
110
+ │ └── README.md # Details about the main task
111
+ ├── posttask/ # Posttask task data, logs, outputs and guidelines
112
+ ├── ...
113
+ ├── processed_main.csv # Processed posttask task data, corresponds to the `posttask` configuration
114
+ └── README.md # Details about the post-task
115
+ └── pretask/ # Pretask data, logs, outputs and guidelines
116
+ ├── ...
117
+ ├── processed_pretask.csv # Processed pretask data, corresponds to the `pretask` configuration
118
+ └── README.md # Details about the pretask
119
+ ```
120
+
121
+ ### Languages
122
+
123
+ The language data of QE4PE is in English (BCP-47 `en`), Italian (BCP-47 `it`) and Dutch (BCP-47 `nl`).
124
+
125
+ ## Dataset Structure
126
+
127
+ ### Data Instances
128
+
129
+ The dataset contains two configurations, corresponding to the two tasks: `pretask`, `main` and `posttask`. `main` contains the full data collected during the main task and analyzed during our experiments. `pretask` contains the data collected in the initial verification phase before the main task, in which all translators worked on texts highlighted in the `supervised` modality. `posttask` contains the data collected in the final phase in which all translators worked on texts in the `no_highlight` modality.
130
+
131
+ ### Data Fields
132
+
133
+ A single entry in the dataframe represents a segment (~sentence) in the dataset, that was machine-translated and post-edited by a professional translator. The following fields are contained in the training set:
134
+
135
+ |Field |Description |
136
+ |------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
137
+ | **Identification** | |
138
+ |`unit_id` | The full entry identifier. Format: `qe4pe-{task_id}-{src_lang}-{tgt_lang}-{doc_id}-{segment_in_doc_id}-{translator_main_task_id}`. |
139
+ |`wmt_id` | Identifier of the sentence in the original [WMT23](./data/setup/wmt23/wmttest2023.eng.jsonl) dataset. |
140
+ |`wmt_category` | Category of the document: `biomedical` or `social` |
141
+ |`doc_id` | The index of the document in the current configuration of the QE4PE dataset containing the current segment. |
142
+ |`segment_in_doc_id` | The index of the segment inside the current document. |
143
+ |`segment_id` | The index of the segment in the current configurations (i.e. concatenating all segments from all documents in order) |
144
+ |`translator_pretask_id` | The identifier for the translator according to the `pretask` format before modality assignments: `tXX`. |
145
+ |`translator_main_id` | The identifier for the translator according to the `main` task format after modality assignments: `{highlight_modality}_tXX`. |
146
+ |`src_lang` | The source language of the segment. For QE4PE, this is always English (`eng`) |
147
+ |`tgt_lang` | The target language of the segment: either Italian (`ita`) or Dutch (`nld`). |
148
+ |`highlight_modality` | The highlighting modality used for the segment. Values: `no_highlight`, `oracle`, `supervised`, `unsupervised`. |
149
+ | **Text statistics** | |
150
+ |`src_num_chars` | Length of the source segment in number of characters. |
151
+ |`mt_num_chars` | Length of the machine-translated segment in number of characters. |
152
+ |`pe_num_chars` | Length of the post-edited segment in number of characters. |
153
+ |`src_num_words` | Length of the source segment in number of words. |
154
+ |`mt_num_words` | Length of the machine-translated segment in number of words. |
155
+ |`pe_num_words` | Length of the post-edited segment in number of words. |
156
+ |`num_minor_highlighted_chars` | Number of characters highlighted as minor errors in the machine-translated text. |
157
+ |`num_major_highlighted_chars` | Number of characters highlighted as major errors in the machine-translated text. |
158
+ |`num_minor_highlighted_words` | Number of words highlighted as minor errors in the machine-translated text. |
159
+ |`num_major_highlighted_words` | Number of words highlighted as major errors in the machine-translated text. |
160
+ | **Edits statistics** | |
161
+ |`num_words_insert` | Number of post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
162
+ |`num_words_delete` | Number of post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
163
+ |`num_words_substitute` | Number of post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
164
+ |`num_words_unchanged` | Number of post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
165
+ |`tot_words_edits` | Total of all edit types for the sentence. |
166
+ |`wer` | Word Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
167
+ |`num_chars_insert` | Number of post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
168
+ |`num_chars_delete` | Number of post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
169
+ |`num_chars_substitute` | Number of post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
170
+ |`num_chars_unchanged` | Number of post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
171
+ |`tot_chars_edits` | Total of all edit types for the sentence. |
172
+ |`cer` | Character Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
173
+ | **Translation quality**| |
174
+ |`mt_bleu_max` | Max BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
175
+ |`mt_bleu_min` | Min BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
176
+ |`mt_bleu_mean` | Mean BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
177
+ |`mt_bleu_std` | Standard deviation of BLEU scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
178
+ |`mt_chrf_max` | Max chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
179
+ |`mt_chrf_min` | Min chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
180
+ |`mt_chrf_mean` | Mean chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
181
+ |`mt_chrf_std` | Standard deviation of chrF scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
182
+ |`mt_ter_max` | Max TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
183
+ |`mt_ter_min` | Min TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
184
+ |`mt_ter_mean` | Mean TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
185
+ |`mt_ter_std` | Standard deviation of TER scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
186
+ |`mt_comet_max` | Max COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
187
+ |`mt_comet_min` | Min COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
188
+ |`mt_comet_mean` | Mean COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters.|
189
+ |`mt_comet_std` | Standard deviation of COMET sentence-level scores for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
190
+ |`mt_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the mt_text. |
191
+ |`mt_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the mt_text. |
192
+ |`pe_bleu_max` | Max BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
193
+ |`pe_bleu_min` | Min BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
194
+ |`pe_bleu_mean` | Mean BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
195
+ |`pe_bleu_std` | Standard deviation of BLEU scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
196
+ |`pe_chrf_max` | Max chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
197
+ |`pe_chrf_min` | Min chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
198
+ |`pe_chrf_mean` | Mean chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
199
+ |`pe_chrf_std` | Standard deviation of chrF scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
200
+ |`pe_ter_max` | Max TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
201
+ |`pe_ter_min` | Min TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
202
+ |`pe_ter_mean` | Mean TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
203
+ |`pe_ter_std` | Standard deviation of TER scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
204
+ |`pe_comet_max` | Max COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
205
+ |`pe_comet_min` | Min COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
206
+ |`pe_comet_mean` | Mean COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters.|
207
+ |`pe_comet_std` | Standard deviation of COMET sentence-level scores for the `pe_text` and all other `pe_text` for the corresponding segment using Unbabel/wmt22-comet-da with default parameters. |
208
+ |`pe_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the pe_text. |
209
+ |`pe_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the pe_text. |
210
+ | **Behavioral data** | |
211
+ |`doc_num_edits` | Total number of edits performed by the translator on the current document. Only the last edit outputs are considered valid. |
212
+ |`doc_edit_order` | Index corresponding to the current document edit order. If equal to `doc_id`, the document was edited in the given order. |
213
+ |`doc_edit_time` | Total editing time for the current document in seconds (from `start` to `end`, no times ignored) |
214
+ |`doc_edit_time_filtered`| Total editing time for the current document in seconds (from `start` to `end`, >5m pauses between logged actions ignored) |
215
+ |`doc_keys_per_min` | Keystrokes per minute computed for the current document using `doc_edit_time_filtered`. |
216
+ |`doc_chars_per_min` | Characters per minute computed for the current document using `doc_edit_time_filtered`. |
217
+ |`doc_words_per_min` | Words per minute computed for the current document using `doc_edit_time_filtered`. |
218
+ |`segment_num_edits` | Total number of edits performed by the translator on the current segment. Only edits for the last edit of the doc are considered valid. |
219
+ |`segment_edit_order` | Index corresponding to the current segment edit order (only first `enter` action counts). If equal to `segment_in_doc_id`, the segment was edited in the given order. |
220
+ |`segment_edit_time` | Total editing time for the current segment in seconds (summed time between `enter`-`exit` blocks) |
221
+ |`segment_edit_time_filtered` | Total editing time for the current segment in seconds (>5m pauses between logged actions ignored). |
222
+ |`segment_keys_per_min` | Keystrokes per minute computed for the current segment using `segment_edit_time_filtered`. |
223
+ |`segment_chars_per_min` | Characters per minute computed for the current segment using `segment_edit_time_filtered`. |
224
+ |`segment_words_per_min` | Words per minute computed for the current segment using `segment_edit_time_filtered`. |
225
+ |`num_enter_actions` | Number of `enter` actions (focus on textbox) performed by the translator on the current segment during post-editing. |
226
+ |`remove_highlights` | If True, the Clear Highlights button was pressed for this segment (always false for `no_highlight` modality). |
227
+ |**Texts and annotations**| |
228
+ |`src_text` | The original source segment from WMT23 requiring translation. |
229
+ |`mt_text` | Output of the `NLLB-3.3B` model when translating `src_text` into `tgt_lang` (default config, 5 beams) |
230
+ |`mt_text_highlighted` | Highlighted version of `mt_text` with potential errors according to the `highlight_modality`. |
231
+ |`pe_text` | Post-edited version of `mt_text` produced by a professional translator with `highlight_modality`. |
232
+ |`mt_pe_word_aligned` | Aligned visual representation of word-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\\n` with `\n` to show the three aligned rows). |
233
+ |`mt_pe_char_aligned` | Aligned visual representation of character-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\\n` with `\n` to show the three aligned rows). |
234
+ |`highlights` | List of dictionaries for highlighted spans with error severity and position, matching XCOMET format for word-level error annotations. |
235
+ |**MQM annotations (`main` config only)**| |
236
+ |`qa_mt_annotator_id` | Annotator ID for the MQM evaluation of `qa_mt_annotated_text`. |
237
+ |`qa_pe_annotator_id` | Annotator ID for the MQM evaluation of `qa_pe_annotated_text`. |
238
+ |`qa_mt_esa_rating` | 0-100 quality rating for the `qa_mt_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
239
+ |`qa_pe_esa_rating` | 0-100 quality rating for the `qa_pe_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
240
+ |`qa_mt_annotated_text` | Version of `mt_text` annotated with MQM errors. Might differ (only slightly) from `mt_text`, included since `qa_mt_mqm_errors` indices are computed on this string. |
241
+ |`qa_pe_annotated_text` | Version of `pe_text` annotated with MQM errors. Might differ (only slightly) from `pe_text`, included since `qa_pe_mqm_errors` indices are computed on this string. |
242
+ |`qa_mt_fixed_text` | Proposed correction of `mqm_mt_annotated_text` following MQM annotation. |
243
+ |`qa_pe_fixed_text` | Proposed correction of `mqm_pe_annotated_text` following MQM annotation. |
244
+ |`qa_mt_mqm_errors` | List of error spans detected by the MQM annotator for the `qa_mt_annotated_text`. Each error span dictionary contains the following fields: `text`: the span in `mqm_mt_annotated_text` containing an error. `text_start`: the start index of the error span in `qa_mt_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `text_end`: the end index of the error span in `qa_mt_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `correction`: the proposed correction in `qa_mt_fixed_text` for the error span in `qa_mt_annotated_text`. `correction_start`: the start index of the error span in `mqm_mt_fixed_text`. -1 if no corrected span is present (e.g. for additions) `correction_end`: the end index of the error span in `qa_mt_fixed_text`. -1 if no corrected span is present (e.g. for additions) `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
245
+ |`qa_pe_mqm_errors` | List of error spans detected by the MQM annotator for the `qa_pe_annotated_text`. Each error span dictionary contains the following fields: `text`: the span in `qa_pe_annotated_text` containing an error. `text_start`: the start index of the error span in `qa_pe_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `text_end`: the end index of the error span in `qa_pe_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `correction`: the proposed correction in `qa_pe_fixed_text` for the error span in `qa_pe_annotated_text`. `correction_start`: the start index of the error span in `qa_pe_fixed_text`. -1 if no corrected span is present (e.g. for additions) `correction_end`: the end index of the error span in `qa_pe_fixed_text`. -1 if no corrected span is present (e.g. for additions) `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
246
+
247
+ ### Data Splits
248
+
249
|`config` | `split`| # examples |
|------------------------------------:|-------:|--------------------------------------------------------------:|
|`main` | `train`| 8100 (51 docs, i.e. 324 sents x 25 translators) |
|`pretask` | `train`| 950 (6 docs, i.e. 38 sents x 25 translators) |
|`posttask` | `train`| 1200 (8 docs, i.e. 50 sents x 24 translators) |
|`pretask_questionnaire` | `train`| 26 (all translators, including replaced/replacements) |
|`posttask_highlight_questionnaire` | `train`| 19 (all translators for highlight modalities + 1 replacement) |
|`posttask_no_highlight_questionnaire`| `train`| 6 (all translators for the `no_highlight` modality) |

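All configs listed above expose a single `train` split and can be loaded with 🤗 Datasets. A minimal sketch, assuming this dataset is hosted at `gsarti/qe4pe`:

```python
from datasets import load_dataset

# Load the main post-editing task (8100 rows, one per segment x translator)
main = load_dataset("gsarti/qe4pe", "main", split="train")

# Example: keep only segments edited under the oracle highlight modality
oracle = main.filter(lambda row: row["highlight_modality"] == "oracle")
```
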
#### Train Split

The `train` split contains all triplets (or pairs, when translation is performed from scratch) annotated with behavioral data produced during the translation.

The following is an example of the post-editing record produced by subject `oracle_t1` for segment `3` of `doc20` in the `eng-nld` direction of the `main` task. The fields `mt_pe_word_aligned` and `mt_pe_char_aligned` are shown over three lines to provide a visual understanding of their contents.

```python
{
    # Identification
    "unit_id": "qe4pe-main-eng-nld-20-3-oracle_t1",
    "wmt_id": "doc5",
    "wmt_category": "biomedical",
    "doc_id": 20,
    "segment_in_doc_id": 3,
    "segment_id": 129,
    "translator_pretask_id": "t4",
    "translator_main_id": "oracle_t1",
    "src_lang": "eng",
    "tgt_lang": "nld",
    "highlight_modality": "oracle",
    # Text statistics
    "src_num_chars": 104,
    "mt_num_chars": 136,
    "pe_num_chars": 106,
    "src_num_words": 15,
    "mt_num_words": 16,
    "pe_num_words": 16,
    # Edits statistics
    "num_words_insert": 0,
    "num_words_delete": 0,
    "num_words_substitute": 1,
    "num_words_unchanged": 15,
    "tot_words_edits": 1,
    "wer": 0.0625,
    "num_chars_insert": 0,
    "num_chars_delete": 0,
    "num_chars_substitute": 6,
    "num_chars_unchanged": 100,
    "tot_chars_edits": 6,
    "cer": 0.0566,
    # Translation quality
    "mt_bleu_max": 100.0,
    "mt_bleu_min": 7.159,
    "mt_bleu_mean": 68.687,
    "mt_bleu_std": 31.287,
    "mt_chrf_max": 100.0,
    "mt_chrf_min": 45.374,
    "mt_chrf_mean": 83.683,
    "mt_chrf_std": 16.754,
    "mt_ter_max": 100.0,
    "mt_ter_min": 0.0,
    "mt_ter_mean": 23.912,
    "mt_ter_std": 29.274,
    "mt_comet_max": 0.977,
    "mt_comet_min": 0.837,
    "mt_comet_mean": 0.94,
    "mt_comet_std": 0.042,
    "mt_xcomet_qe": 0.985,
    "mt_xcomet_errors": "[]",
    "pe_bleu_max": 100.0,
    "pe_bleu_min": 11.644,
    "pe_bleu_mean": 61.335,
    "pe_bleu_std": 28.617,
    "pe_chrf_max": 100.0,
    "pe_chrf_min": 53.0,
    "pe_chrf_mean": 79.173,
    "pe_chrf_std": 13.679,
    "pe_ter_max": 100.0,
    "pe_ter_min": 0.0,
    "pe_ter_mean": 28.814,
    "pe_ter_std": 28.827,
    "pe_comet_max": 0.977,
    "pe_comet_min": 0.851,
    "pe_comet_mean": 0.937,
    "pe_comet_std": 0.035,
    "pe_xcomet_qe": 0.984,
    "pe_xcomet_errors": "[]",
    # Behavioral data
    "doc_num_edits": 103,
    "doc_edit_order": 20,
    "doc_edit_time": 118,
    "doc_edit_time_filtered": 118,
    "doc_keys_per_min": 52.37,
    "doc_chars_per_min": 584.24,
    "doc_words_per_min": 79.83,
    "segment_num_edits": 9,
    "segment_edit_order": 3,
    "segment_edit_time": 9,
    "segment_edit_time_filtered": 9,
    "segment_keys_per_min": 60.0,
    "segment_chars_per_min": 906.67,
    "segment_words_per_min": 106.67,
    "num_enter_actions": 2,
    "remove_highlights": False,
    # Texts and annotations
    "src_text": "The speed of its emerging growth frequently outpaces the development of quality assurance and education.",
    "mt_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
    "mt_text_highlighted": "De snelheid van de opkomende groei is vaak <minor>sneller</minor> dan de ontwikkeling van kwaliteitsborging en <major>onderwijs.</major>",
    "pe_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
    "mt_pe_word_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
                          "PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
                          "                                                                                                    S",
    "mt_pe_char_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
                          "PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
                          "                                                                                                     SS SS SS",
    "highlights": """[
        {
            'text': 'sneller',
            'severity': 'minor',
            'start': 43,
            'end': 50
        },
        {
            'text': 'onderwijs.',
            'severity': 'major',
            'start': 96,
            'end': 106
        }
    ]""",
    # QA annotations
    "qa_mt_annotator_id": "qa_nld_3",
    "qa_pe_annotator_id": "qa_nld_1",
    "qa_mt_esa_rating": 100.0,
    "qa_pe_esa_rating": 80.0,
    "qa_mt_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
    "qa_pe_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
    "qa_mt_fixed_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
    "qa_pe_fixed_text": "De snelheid van de ontluikende groei overtreft vaak de ontwikkeling van kwaliteitsborging en onderwijs.",
    "qa_mt_mqm_errors": "[]",
    "qa_pe_mqm_errors": """[
        {
            "text": "opkomende",
            "text_start": 19,
            "text_end": 28,
            "correction": "ontluikende",
            "correction_start": 19,
            "correction_end": 30,
            "description": "Mistranslation - not the correct word",
            "mqm_category": "Mistranslation",
            "severity": "Minor",
            "comment": "",
            "edit_order": 1
        }
    ]"""
}
```
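
As a consistency check on the behavioral fields, the per-minute throughput values in the example can be re-derived from the edit times, which appear to be logged in seconds (the arithmetic below matches the reported values; this reading is an inference from the data, not an official specification):

```python
# Hypothetical re-derivation of the segment-level throughput fields above,
# assuming segment_edit_time is measured in seconds.
minutes = 9 / 60               # segment_edit_time
print(round(9 / minutes, 2))   # segment_num_edits -> 60.0   (segment_keys_per_min)
print(round(136 / minutes, 2)) # mt_num_chars      -> 906.67 (segment_chars_per_min)
print(round(16 / minutes, 2))  # mt_num_words      -> 106.67 (segment_words_per_min)
```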

The text is provided as-is, without further preprocessing or tokenization.

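Note that `mt_text_highlighted` is redundant by construction: it can be rebuilt from `mt_text` and the character offsets in `highlights`. The following helper is an illustrative sketch, not part of the dataset tooling:

```python
import ast

def rebuild_highlighted(mt_text: str, raw_highlights: str) -> str:
    """Re-insert <minor>/<major> tags into the raw MT text from span offsets."""
    spans = ast.literal_eval(raw_highlights)
    # Process spans right-to-left so earlier character offsets stay valid
    for span in sorted(spans, key=lambda s: s["start"], reverse=True):
        start, end, tag = span["start"], span["end"], span["severity"]
        mt_text = f"{mt_text[:start]}<{tag}>{mt_text[start:end]}</{tag}>{mt_text[end:]}"
    return mt_text
```
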
### Dataset Creation

The datasets were parsed from GroTE inputs, logs and outputs for the QE4PE study, available in this repository. Processed dataframes were created using the `qe4pe process_task_data` command. Refer to the [QE4PE Github repository](https://github.com/gsarti/qe4pe) for additional details. The overall structure and processing of the dataset were inspired by the [DivEMT dataset](https://huggingface.co/datasets/GroNLP/divemt).

### QA Annotations

MQM annotations were collected using Google Sheets, and error highlights were parsed from the exported HTML output, with well-formedness checks ensuring their validity. Out of the original 51 docs (324 segments) in `main`, 24 docs (10 biomedical, 14 social, totaling 148 segments) were sampled at random and annotated by professional translators.

## Additional Information

### Metric Signatures

The following signatures correspond to the metrics reported in the processed dataframes:

```shell
# Computed using SacreBLEU: https://github.com/mjpost/sacrebleu
BLEU: case:mixed|eff:yes|tok:13a|smooth:exp|version:2.3.1
ChrF: case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.3.1
TER: case:lc|tok:tercom|norm:no|punct:yes|asian:no|version:2.3.1

# Computed using Unbabel COMET: https://github.com/Unbabel/COMET
Comet: Python3.11.9|Comet2.2.2|fp32|Unbabel/wmt22-comet-da
XComet: Python3.10.12|Comet2.2.1|fp32|Unbabel/XCOMET-XXL
```

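The lexical metrics can be reproduced at the segment level with SacreBLEU's Python API. The sketch below uses settings matching the signatures above (`effective_order=True` corresponds to `eff:yes`; the ChrF and TER defaults already match); the hypothesis/reference pairing is illustrative only:

```python
from sacrebleu.metrics import BLEU, CHRF, TER

# Settings matching the signatures above
bleu, chrf, ter = BLEU(effective_order=True), CHRF(), TER()

hyp = "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding."
ref = "De snelheid van de ontluikende groei overtreft vaak de ontwikkeling van kwaliteitsborging en onderwijs."

for metric in (bleu, chrf, ter):
    print(metric.get_signature(), metric.sentence_score(hyp, [ref]).score)
```
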
### Dataset Curators

For problems related to this 🤗 Datasets version, please contact me at [[email protected]](mailto:[email protected]).

### Citation Information

```bibtex
@misc{sarti-etal-2024-qe4pe,
    title={{QE4PE}: Word-level Quality Estimation for Human Post-Editing},
    author={Gabriele Sarti and Vilém Zouhar and Grzegorz Chrupała and Ana Guerberof-Arenas and Malvina Nissim and Arianna Bisazza},
    year={2025},
    eprint={2503.03044},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2503.03044},
}
```