---
license: mit
language:
- en
- ja
---

# NAIST-NICT WMT’23 General MT Task Submission

## Model Details

### Model Description

Translation models submitted to the WMT'23 English ↔ Japanese general machine translation task.

This repository provides:

- seven models per language direction, trained with various combinations of hyperparameters (`ckpt/`)
- a datastore per language direction for kNN-MT (`index/`)

For more details, please see [NAIST-NICT WMT’23 General MT Task Submission](https://aclanthology.org/2023.wmt-1.7/).

- **Developed by:** Hiroyuki Deguchi, Kenji Imamura, Yuto Nishida, Yusuke Sakai, Justin Vasselli, Taro Watanabe
- **Model type:** Translation model
- **Language pairs:** Japanese-to-English and English-to-Japanese
- **License:** MIT License

## How to Get Started with the Model

You can use our models with [fairseq](https://github.com/facebookresearch/fairseq).

```
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
```

### Preprocess

First, binarize the test data (`<src>` and `<tgt>` are the language codes, e.g. `en` and `ja`):

```
DATA_BIN=<path to the output directory>

fairseq-preprocess --source-lang <src> --target-lang <tgt> \
    --testpref <prefix of the tokenized test files> \
    --destdir ${DATA_BIN} \
    --workers 20
```

### Beam Search

Inference with beam search:

```
fairseq-generate \
    --gen-subset test \
    --task translation \
    --source-lang <src> \
    --target-lang <tgt> \
    --path <path to a model checkpoint> \
    --nbest 50 \
    --beam 50 \
    --max-tokens 1024 \
    --required-batch-size-multiple 1 \
    ${DATA_BIN}/
```

### Ensemble

Inference with model ensembling (checkpoint paths are joined with `:`):

```
MODEL1=<path to model 1>
MODEL2=<path to model 2>
...
MODEL7=<path to model 7>

fairseq-generate \
    --gen-subset test \
    --task translation \
    --source-lang <src> \
    --target-lang <tgt> \
    --path ${MODEL1}:${MODEL2}:${MODEL3}:${MODEL4}:${MODEL5}:${MODEL6}:${MODEL7} \
    --seed 0 \
    --nbest 50 \
    --beam 50 \
    --max-tokens 1024 \
    --required-batch-size-multiple 1 \
    ${DATA_BIN}/
```

### Diversified Decoding (Nucleus Sampling)

Inference with nucleus (top-p) sampling:

```
fairseq-generate \
    --gen-subset test \
    --task translation \
    --source-lang <src> \
    --target-lang <tgt> \
    --seed 0 \
    --path <path to a model checkpoint> \
    --nbest 50 \
    --beam 50 \
    --max-tokens 1024 \
    --sampling \
    --sampling-topp <p> \
    --required-batch-size-multiple 1 \
    ${DATA_BIN}/
```

Each of these commands prints the n-best hypotheses to standard output; a sketch for collecting them into plain-text candidate lists is given at the end of this card.

### kNN-MT

#### Concatenate the index files

We uploaded the index files in split parts. You can concatenate the parts and verify the md5sum as follows:

```
echo '68b29d7d1483c88b33804828854b28d7' > original.md5  # for English
echo '77ecbd3aaad7f48814f1c4ae95821256' > original.md5  # for Japanese
cat index.ffn_in.l2.bin.part* > index.ffn_in.l2.bin.reconstructed
md5sum index.ffn_in.l2.bin.reconstructed | cut -d' ' -f1 > reconstructed.md5
diff original.md5 reconstructed.md5
```

#### Inference

For kNN-MT inference, you can use [knn-seq](https://github.com/naist-nlp/knn-seq).

## Citation

**BibTeX:**

```
@inproceedings{deguchi-etal-2023-naist,
    title = "{NAIST}-{NICT} {WMT}{'}23 General {MT} Task Submission",
    author = "Deguchi, Hiroyuki and Imamura, Kenji and Nishida, Yuto and Sakai, Yusuke and Vasselli, Justin and Watanabe, Taro",
    editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof",
    booktitle = "Proceedings of the Eighth Conference on Machine Translation",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.wmt-1.7",
    doi = "10.18653/v1/2023.wmt-1.7",
    pages = "110--118",
    abstract = "In this paper, we describe our NAIST-NICT submission to the WMT{'}23 English ↔ Japanese general machine translation task. Our system generates diverse translation candidates and reranks them using a two-stage reranking system to find the best translation.
First, we generated 50 candidates each from 18 translation methods using a variety of techniques to increase the diversity of the translation candidates. We trained seven models per language direction using various combinations of hyperparameters. From these models we used various decoding algorithms, ensembling the models, and using kNN-MT (Khandelwal et al., 2021). We processed the 900 translation candidates through a two-stage reranking system to find the most promising candidate. In the first step, we compared 50 candidates from each translation method using DrNMT (Lee et al., 2021) and returned the candidate with the best score. We ranked the final 18 candidates using COMET-MBR (Fernandes et al., 2022) and returned the best score as the system output. We found that generating diverse translation candidates improved translation quality using the well-designed reranker model.", } ```
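
## Example: Collecting the n-best Candidates

The `fairseq-generate` commands above print the n-best hypotheses to standard output, one `H-<sentence id>` line per hypothesis. Below is a minimal sketch of collecting them into a plain-text candidate list (50 consecutive lines per source sentence). The file names `gen.out` and `candidates.txt` are placeholders, and this step is a generic fairseq output-parsing recipe rather than part of our released scripts:

```
# Redirect the output of any fairseq-generate command above, e.g.:
#   fairseq-generate ... ${DATA_BIN}/ > gen.out
# Hypothesis lines have the form "H-<id>\t<score>\t<tokens>".
grep '^H-' gen.out \
    | sed 's/^H-//' \
    | sort -s -n -k 1,1 \
    | cut -f 3 \
    > candidates.txt
```

Note that the `H-` lines keep the model's subword tokenization and the scores are dropped here; detokenization and the two-stage reranking (DrNMT, COMET-MBR) are described in the paper.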