alirezamsh committed on
Commit db2f441
1 Parent(s): 4953360

Update README.md

Files changed (1):
  1. README.md +33 -14

README.md CHANGED
@@ -192,21 +192,40 @@ Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bas
 
 If you use this model for your research, please cite the following work:
 ```
-@misc{mohammadshahi2022small100,
-      title={SMaLL-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages},
-      author={Alireza Mohammadshahi and Vassilina Nikoulina and Alexandre Berard and Caroline Brun and James Henderson and Laurent Besacier},
-      year={2022},
-      eprint={2210.11621},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL}
+@inproceedings{mohammadshahi-etal-2022-small,
+    title = "{SM}a{LL}-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages",
+    author = "Mohammadshahi, Alireza and
+      Nikoulina, Vassilina and
+      Berard, Alexandre and
+      Brun, Caroline and
+      Henderson, James and
+      Besacier, Laurent",
+    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
+    month = dec,
+    year = "2022",
+    address = "Abu Dhabi, United Arab Emirates",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.emnlp-main.571",
+    pages = "8348--8359",
+    abstract = "In recent years, multilingual machine translation models have achieved promising performance on low-resource language pairs by sharing information between similar languages, thus enabling zero-shot translation. To overcome the {``}curse of multilinguality{''}, these models often opt for scaling up the number of parameters, which makes their use in resource-constrained environments challenging. We introduce SMaLL-100, a distilled version of the M2M-100(12B) model, a massively multilingual machine translation model covering 100 languages. We train SMaLL-100 with uniform sampling across all language pairs and therefore focus on preserving the performance of low-resource languages. We evaluate SMaLL-100 on different low-resource benchmarks: FLORES-101, Tatoeba, and TICO-19 and demonstrate that it outperforms previous massively multilingual models of comparable sizes (200-600M) while improving inference latency and memory usage. Additionally, our model achieves comparable results to M2M-100 (1.2B), while being 3.6x smaller and 4.3x faster at inference.",
 }
 
-@misc{mohammadshahi2022compressed,
-      title={What Do Compressed Multilingual Machine Translation Models Forget?},
-      author={Alireza Mohammadshahi and Vassilina Nikoulina and Alexandre Berard and Caroline Brun and James Henderson and Laurent Besacier},
-      year={2022},
-      eprint={2205.10828},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL}
+@inproceedings{mohammadshahi-etal-2022-compressed,
+    title = "What Do Compressed Multilingual Machine Translation Models Forget?",
+    author = "Mohammadshahi, Alireza and
+      Nikoulina, Vassilina and
+      Berard, Alexandre and
+      Brun, Caroline and
+      Henderson, James and
+      Besacier, Laurent",
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
+    month = dec,
+    year = "2022",
+    address = "Abu Dhabi, United Arab Emirates",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.findings-emnlp.317",
+    pages = "4308--4329",
+    abstract = "Recently, very large pre-trained models achieve state-of-the-art results in various natural language processing (NLP) tasks, but their size makes it more challenging to apply them in resource-constrained environments. Compression techniques allow to drastically reduce the size of the models and therefore their inference time with negligible impact on top-tier metrics. However, the general performance averaged across multiple tasks and/or languages may hide a drastic performance drop on under-represented features, which could result in the amplification of biases encoded by the models. In this work, we assess the impact of compression methods on Multilingual Neural Machine Translation models (MNMT) for various language groups, gender, and semantic biases by extensive analysis of compressed models on different machine translation benchmarks, i.e. FLORES-101, MT-Gender, and DiBiMT. We show that the performance of under-represented languages drops significantly, while the average BLEU metric only slightly decreases. Interestingly, the removal of noisy memorization with compression leads to a significant improvement for some medium-resource languages. Finally, we demonstrate that compression amplifies intrinsic gender and semantic biases, even in high-resource languages.",
 }
+
 ```