MultiSlav-5lang

MLR @ Allegro.com

Multilingual Many2Many MT Model

MultiSlav-5lang is a vanilla Encoder-Decoder transformer model trained on a sentence-level Machine Translation task. The model supports translation between 5 languages: Czech, English, Polish, Slovak, and Slovene. This model is part of the MultiSlav collection. More information will be available soon in our upcoming MultiSlav paper.

Experiments were conducted as part of a research project by the Machine Learning Research lab for Allegro.com. Big thanks to laniqo.com for cooperation in this research.

MultiSlav-5lang translates directly between all supported languages using a single Many2Many model, as shown in the diagram above.

Model description

  • Model name: multislav-5lang
  • Source Languages: Czech, English, Polish, Slovak, Slovene
  • Target Languages: Czech, English, Polish, Slovak, Slovene
  • Model Collection: MultiSlav
  • Model type: MarianMTModel Encoder-Decoder
  • License: CC BY 4.0 (commercial use allowed)
  • Developed by: MLR @ Allegro & Laniqo.com

Supported languages

When using the model, you must specify the target language of the translation. Target language tokens are represented as 3-letter ISO 639-3 language codes embedded in the format >>xxx<<. All accepted directions and their respective tokens are listed below. Each of them was added as a special token to the SentencePiece tokenizer.

| Target Language | First token |
|---|---|
| Czech | >>ces<< |
| English | >>eng<< |
| Polish | >>pol<< |
| Slovak | >>slk<< |
| Slovene | >>slv<< |

Use case quickstart

Example code snippet for using the model. Due to a bug, the MarianMTModel class must be used explicitly.

from transformers import AutoTokenizer, MarianMTModel

model_name = "Allegro/MultiSlav-5lang"

# Load the tokenizer and the model (MarianMTModel must be used explicitly, see note above)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = "Allegro to internetowa platforma e-commerce, na której swoje produkty sprzedają średnie i małe firmy, jak również duże marki."
target_languages = ["ces", "eng", "slk", "slv"]
batch_to_translate = [
    f">>{lang}<<" + " " + text for lang in target_languages
]

translations = model.generate(**tokenizer.batch_encode_plus(batch_to_translate, return_tensors="pt"))
decoded_translations = tokenizer.batch_decode(translations, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for trans in decoded_translations:
    print(trans)

Generated outputs:

Czech output:

Allegro je on-line e-commerce platforma, na které své produkty prodávají střední a malé firmy, stejně jako velké značky.

English output:

Allegro is an online e-commerce platform on which medium and small companies as well as large brands sell their products.

Slovak output:

Allegro je internetová e-commerce platforma, na ktorej svoje produkty predávajú stredné a malé podniky, ako aj veľké značky.

Slovene output:

Allegro je spletna platforma za e-poslovanje, na kateri srednje velika in mala podjetja ter velike blagovne znamke prodajajo svoje izdelke.

The model is also capable of translating into Polish, following the same pattern:

text = ">>pol<<" + " " + "Allegro is an online e-commerce platform on which medium and small companies as well as large brands sell their products."
translation = model.generate(**tokenizer.batch_encode_plus([text], return_tensors="pt"))
decoded_translation = tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True)

print(decoded_translation[0])

Generated Polish output:

Allegro to internetowa platforma e-commerce, na której sprzedają swoje produkty średnie i małe firmy, a także duże marki.
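
Continuing from the snippet above, model.generate also accepts the standard Hugging Face generation arguments if you want to control decoding; the values below are illustrative assumptions, not settings recommended by the authors:

# Beam search with an explicit output-length cap (illustrative values)
translation = model.generate(
    **tokenizer.batch_encode_plus([text], return_tensors="pt"),
    num_beams=4,
    max_new_tokens=100,
)
print(tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True)[0])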

Training

The SentencePiece tokenizer has a vocabulary size of 80k in total (16k per language). The tokenizer was trained on a randomly sampled part of the training corpus. During training we used the MarianNMT framework. The base Marian configuration used was transformer-big. All training parameters are listed in the table below.

Training hyperparameters:

| Hyperparameter | Value |
|---|---|
| Total Parameter Size | 258M |
| Training Examples | 578M |
| Vocab Size | 80k |
| Base Parameters | Marian transformer-big |
| Number of Encoding Layers | 6 |
| Number of Decoding Layers | 6 |
| Model Dimension | 1024 |
| FF Dimension | 4096 |
| Heads | 16 |
| Dropout | 0.1 |
| Batch Size | mini batch fit to VRAM |
| Training Accelerators | 4x A100 40GB |
| Max Length | 100 tokens |
| Optimizer | Adam |
| Warmup steps | 8000 |
| Context | Sentence-level MT |
| Source Languages Supported | Czech, English, Polish, Slovak, Slovene |
| Target Languages Supported | Czech, English, Polish, Slovak, Slovene |
| Precision | float16 |
| Validation Freq | 3000 steps |
| Stop Metric | ChrF |
| Stop Criterion | 20 Validation steps |
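
The reported sizes can be sanity-checked against the released checkpoint with the transformers API; the sketch below only inspects the published Hugging Face artifacts and does not reproduce the Marian training setup:

from transformers import AutoTokenizer, MarianMTModel

model_name = "Allegro/MultiSlav-5lang"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Vocabulary size (expected ~80k) and total parameter count (expected ~258M)
print(f"vocab size: {len(tokenizer)}")
print(f"parameters: {sum(p.numel() for p in model.parameters()):,}")

# The target-language tokens are part of the SentencePiece vocabulary as special tokens
for lang in ["ces", "eng", "pol", "slk", "slv"]:
    token = f">>{lang}<<"
    print(token, tokenizer.convert_tokens_to_ids(token))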

Training corpora

The main research question was: "How does adding additional, related languages impact the quality of the model?" We explored it within the Slavic language family. In this model we experimented with additionally adding English <-> Slavic parallel corpora to further increase the amount of open-source training data. We found that the additional data clearly improved performance compared to the bi-directional baseline models, and compared to the pivot models and MultiSlav-4slav in most directions. For example, in translation from Polish to Czech this allowed us to expand the training data size from 63M to 578M examples, and from 18M to 578M examples for Slovak to Slovene translation.

We used only explicitly open-source data to ensure the open-source license of our model. Datasets were downloaded via the MT-Data library. Total number of examples after filtering and deduplication: 578M.

The datasets used and their sizes (number of examples) prior to filtering and deduplication:

| Corpus | Data Size |
|---|---|
| paracrawl | 246407901 |
| opensubtitles | 167583218 |
| multiparacrawl | 52388826 |
| dgt | 36403859 |
| elrc | 29687222 |
| xlent | 18375223 |
| wikititles | 12936394 |
| wmt | 11074816 |
| wikimatrix | 10435588 |
| dcep | 10239150 |
| ELRC | 7609067 |
| tildemodel | 6309369 |
| europarl | 6088362 |
| eesc | 5604672 |
| eubookshop | 3732718 |
| emea | 3482661 |
| jrc_acquis | 2920805 |
| ema | 1881408 |
| qed | 1835208 |
| elitr_eca | 1398536 |
| EU-dcep | 1132950 |
| rapid | 1016905 |
| ecb | 885442 |
| kde4 | 541944 |
| news_commentary | 498432 |
| kde | 473269 |
| bible_uedin | 429692 |
| europat | 358911 |
| elra | 357696 |
| wikipedia | 352118 |
| wikimedia | 201088 |
| tatoeba | 91251 |
| globalvoices | 69736 |
| euconst | 65507 |
| ubuntu | 47301 |
| php | 44031 |
| ecdc | 21154 |
| eac | 20224 |
| eac_reference | 10099 |
| gnome | 4466 |
| EU-eac | 2925 |
| books | 2816 |
| EU-ecdc | 2210 |
| newsdev | 1953 |
| khresmoi_summary | 889 |
| czechtourism | 832 |
| khresmoi_summary_dev | 455 |
| worldbank | 189 |

Evaluation

Evaluation of the models was performed on the Flores200 dataset. The tables below compare the performance of open-source models and of all applicable models from our collection. Metrics: BLEU, ChrF2, and Unbabel/wmt22-comet-da.
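
For reference, the same metric families can be computed on your own outputs with off-the-shelf tooling; the sketch below assumes the sacrebleu and unbabel-comet Python packages and toy data, and is not necessarily the exact evaluation pipeline used for the numbers reported here:

from sacrebleu.metrics import BLEU, CHRF
from comet import download_model, load_from_checkpoint

# Hypothetical toy data: source sentences, system translations, and references
sources = ["Mám rád kávu."]
hypotheses = ["Lubię kawę."]
references = ["Lubię kawę."]

# Corpus-level BLEU and ChrF via sacrebleu
print(BLEU().corpus_score(hypotheses, [references]))
print(CHRF().corpus_score(hypotheses, [references]))

# COMET score with the Unbabel/wmt22-comet-da model
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
comet_data = [
    {"src": s, "mt": h, "ref": r}
    for s, h, r in zip(sources, hypotheses, references)
]
print(comet_model.predict(comet_data, batch_size=8, gpus=0).system_score)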

Translation results for Polish to Czech (the Slavic direction with the highest data-regime):

| Model | Comet22 | BLEU | ChrF | Model Size |
|---|---|---|---|---|
| M2M−100 | 89.6 | 19.8 | 47.7 | 1.2B |
| NLLB−200 | 89.4 | 19.2 | 46.7 | 1.3B |
| Opus Sla-Sla | 82.9 | 14.6 | 42.6 | 64M |
| BiDi-ces-pol (baseline) | 90.0 | 20.3 | 48.5 | 209M |
| P4-pol | 90.2 | 20.2 | 48.5 | 2x 242M |
| P5-eng | 89.0 | 19.9 | 48.3 | 2x 258M |
| P5-ces | 90.3 | 20.2 | 48.6 | 2x 258M |
| MultiSlav-4slav | 90.2 | 20.6 | 48.7 | 242M |
| MultiSlav-5lang * | 90.4 | 20.7 | 48.9 | 258M |

Translation results for Slovak to Slovene (the Slavic direction with the lowest data-regime):

| Model | Comet22 | BLEU | ChrF | Model Size |
|---|---|---|---|---|
| M2M−100 | 89.6 | 26.6 | 55.0 | 1.2B |
| NLLB−200 | 88.8 | 23.3 | 42.0 | 1.3B |
| BiDi-slk-slv (baseline) | 89.4 | 26.6 | 55.4 | 209M |
| P4-pol | 88.4 | 24.8 | 53.2 | 2x 242M |
| P5-eng | 88.5 | 25.6 | 54.6 | 2x 258M |
| P5-ces | 89.8 | 26.6 | 55.3 | 2x 258M |
| MultiSlav-4slav | 90.1 | 27.1 | 55.7 | 242M |
| MultiSlav-5lang * | 90.2 | 27.1 | 55.7 | 258M |

* this model

The pivot models (P4-pol, P5-eng, P5-ces) are systems of 2 models, Many2XXX and XXX2Many; see e.g. P5-ces2many.

Limitations and Biases

We did not evaluate the inherent bias contained in the training datasets. It is advised to validate the bias of our models in your prospective domain. This might be especially problematic in translation from English into Slavic languages, which require explicitly indicated gender; the model might hallucinate gender based on biases present in the training data.

License

The model is licensed under CC BY 4.0, which allows for commercial use.

Citation

TO BE UPDATED SOON 🤗

Contact Options

Authors:

Please don't hesitate to contact the authors if you have any questions or suggestions:
