---
license: mit
language:
- en
- ja
---

# NAIST-NICT WMT’23 General MT Task Submission


## Model Details

### Model Description


Translation models for the NAIST-NICT submission to the WMT'23 English ↔ Japanese general machine translation task.
This repository provides:
- seven models per language direction, trained with different combinations of hyperparameters (`ckpt/`)
- a kNN-MT datastore per language direction (`index/`)

For more details, please see [NAIST-NICT WMT’23 General MT Task Submission](https://aclanthology.org/2023.wmt-1.7/).

- **Developed by:** Hiroyuki Deguchi, Kenji Imamura, Yuto Nishida, Yusuke Sakai, Justin Vasselli, Taro Watanabe.
- **Model type:** Translation model
- **Language pairs:** Japanese-to-English and English-to-Japanese
- **License:** MIT License

## How to Get Started with the Model

You can use our models with [fairseq](https://github.com/facebookresearch/fairseq).
```
git clone https://github.com/facebookresearch/fairseq
cd fairseq
pip install --editable ./
```
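
The released checkpoints (`ckpt/`) and kNN-MT datastores (`index/`) can be downloaded from this repository, for example with the Hugging Face Hub CLI (the repository id and local directory below are placeholders to fill in; this assumes a recent `huggingface_hub`):
```
# download the checkpoints (ckpt/) and datastores (index/) from this repository
huggingface-cli download <repo id> --local-dir <path to save the files>
```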

### Preprocess
First, binarize the test data. Pass the dictionaries used by the released checkpoints via `--srcdict` and `--tgtdict` so that the binarized data is compatible with the models:
```
DATA_BIN=<path to save preprocessed data>
fairseq-preprocess --source-lang <source language> --target-lang <target language> \
    --testpref <prefix of test text> \
    --srcdict <path to source dictionary> \
    --tgtdict <path to target dictionary> \
    --destdir ${DATA_BIN} \
    --workers 20
```
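
Note that `--testpref` is a path prefix: the source and target files are expected at `<prefix>.<source language>` and `<prefix>.<target language>`. A hypothetical English-to-Japanese invocation, assuming tokenized test files `data/test.en` and `data/test.ja` and dictionaries `dict.en.txt` / `dict.ja.txt` (all file names here are illustrative):
```
# binarize the English-to-Japanese test set (illustrative paths)
DATA_BIN=data-bin/wmt23.en-ja
fairseq-preprocess --source-lang en --target-lang ja \
    --testpref data/test \
    --srcdict dict.en.txt \
    --tgtdict dict.ja.txt \
    --destdir ${DATA_BIN} \
    --workers 20
```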


### Beam Search
Inference with beam search:
```
fairseq-generate \
    --gen-subset test \
    --task translation \
    --source-lang <source language> \
    --target-lang <target language> \
    --path <path to model> \
    --nbest 50 \
    --beam 50 \
    --max-tokens 1024 \
    --required-batch-size-multiple 1 \
    ${DATA_BIN}/
```
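
`fairseq-generate` prints each hypothesis as an `H-` line (tab-separated id, score, and text), so `--nbest 50` yields 50 such lines per source sentence. A simple way to collect the candidate texts, assuming the command above was redirected to `gen.out`:
```
# keep only the hypothesis text (third tab-separated field of the H- lines)
grep '^H-' gen.out | cut -f3- > hypotheses.txt
```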

### Ensemble
Inference with model ensembling:
```
MODEL1=<path to model1>
MODEL2=<path to model2>
...
MODEL7=<path to model7>

fairseq-generate \
    --gen-subset test \
    --task translation \
    --source-lang <source language> \
    --target-lang <target language> \
    --path ${MODEL1}:${MODEL2}:${MODEL3}:${MODEL4}:${MODEL5}:${MODEL6}:${MODEL7} \
    --seed 0 \
    --nbest 50 \
    --beam 50 \
    --max-tokens 1024 \
    --required-batch-size-multiple 1 \
    ${DATA_BIN}/
```
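
Writing out all seven checkpoint paths by hand can be avoided. A small sketch that builds the colon-separated `--path` value from a checkpoint directory (it assumes the checkpoints are stored as `*.pt` files, which may differ from the actual layout in `ckpt/`):
```
# join every checkpoint into one colon-separated list for --path
MODELS=$(ls <path to checkpoint directory>/*.pt | paste -sd ':' -)
echo ${MODELS}  # pass this value to --path in the command above
```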

### Diversified Decoding (Nucleus Sampling)
Inference with nucleus (top-p) sampling:
```
fairseq-generate \
    --gen-subset test \
    --task translation \
    --source-lang <source language> \
    --target-lang <target language> \
    --seed 0 \
    --path <path to model> \
    --nbest 50 \
    --beam 50 \
    --max-tokens 1024 \
    --sampling \
    --sampling-topp <hyperparameter> \
    --required-batch-size-multiple 1 \
    ${DATA_BIN}/
```
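
To obtain several diverse candidate sets from one model, the sampling run can be repeated with different `--sampling-topp` values or seeds. An illustrative sweep; the values below are examples, not necessarily those used for the submission:
```
# illustrative top-p sweep; each run writes its own output file
for topp in 0.6 0.8 0.9; do
    fairseq-generate --gen-subset test --task translation \
        --source-lang <source language> --target-lang <target language> \
        --seed 0 --path <path to model> \
        --nbest 50 --beam 50 --max-tokens 1024 \
        --sampling --sampling-topp ${topp} \
        --required-batch-size-multiple 1 \
        ${DATA_BIN}/ > sampling.topp${topp}.out
done
```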

### kNN-MT
#### Concatenate the index files
The kNN-MT index files are uploaded as split parts.
You can concatenate the parts and verify the md5sum as follows:
```
# write the reference hash (use the line for your language direction)
echo '68b29d7d1483c88b33804828854b28d7' > original.md5 # for English
echo '77ecbd3aaad7f48814f1c4ae95821256' > original.md5 # for Japanese

cat index.ffn_in.l2.bin.part* > index.ffn_in.l2.bin.reconstructed
# md5sum prints "<hash>  <filename>", so keep only the hash before comparing
md5sum index.ffn_in.l2.bin.reconstructed | awk '{print $1}' > reconstructed.md5
diff original.md5 reconstructed.md5
```

#### Inference
For kNN-MT inference with the released datastores, you can use [knn-seq](https://github.com/naist-nlp/knn-seq).
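A hedged installation sketch (the exact steps may differ; see the knn-seq README for how to build the datastore index and run kNN-MT inference):
```
# install knn-seq from source (assumed layout; follow its README if this differs)
git clone https://github.com/naist-nlp/knn-seq
cd knn-seq
pip install ./
```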

## Citation


**BibTeX:**

```
@inproceedings{deguchi-etal-2023-naist,
    title = "{NAIST}-{NICT} {WMT}{'}23 General {MT} Task Submission",
    author = "Deguchi, Hiroyuki  and
      Imamura, Kenji  and
      Nishida, Yuto  and
      Sakai, Yusuke  and
      Vasselli, Justin  and
      Watanabe, Taro",
    editor = "Koehn, Philipp  and
      Haddow, Barry  and
      Kocmi, Tom  and
      Monz, Christof",
    booktitle = "Proceedings of the Eighth Conference on Machine Translation",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.wmt-1.7",
    doi = "10.18653/v1/2023.wmt-1.7",
    pages = "110--118",
    abstract = "In this paper, we describe our NAIST-NICT submission to the WMT{'}23 English ↔ Japanese general machine translation task. Our system generates diverse translation candidates and reranks them using a two-stage reranking system to find the best translation. First, we generated 50 candidates each from 18 translation methods using a variety of techniques to increase the diversity of the translation candidates. We trained seven models per language direction using various combinations of hyperparameters. From these models we used various decoding algorithms, ensembling the models, and using kNN-MT (Khandelwal et al., 2021). We processed the 900 translation candidates through a two-stage reranking system to find the most promising candidate. In the first step, we compared 50 candidates from each translation method using DrNMT (Lee et al., 2021) and returned the candidate with the best score. We ranked the final 18 candidates using COMET-MBR (Fernandes et al., 2022) and returned the best score as the system output. We found that generating diverse translation candidates improved translation quality using the well-designed reranker model.",
}
```