How to use:

For fine-tuning

Load model directly

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en")
model = AutoModelForSeq2SeqLM.from_pretrained("trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en")
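
The snippet above only loads the checkpoint. A minimal fine-tuning sketch with Seq2SeqTrainer is shown below; the dataset name ("my_vi_en_corpus"), its "vi"/"en" columns, and the hyperparameters are illustrative assumptions and not part of this card.

from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)
from datasets import load_dataset

model_name = "trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# mBART-50 identifies languages by codes; set the source and target languages
tokenizer.src_lang = "vi_VN"
tokenizer.tgt_lang = "en_XX"

# Hypothetical parallel corpus with "vi" and "en" text columns; replace with your own data
dataset = load_dataset("my_vi_en_corpus")

def preprocess(batch):
    # Tokenize Vietnamese inputs and English targets
    return tokenizer(batch["vi"], text_target=batch["en"], max_length=128, truncation=True)

tokenized = dataset.map(preprocess, batched=True)

args = Seq2SeqTrainingArguments(
    output_dir="mbart-vi-en-finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()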

For inference

Use a pipeline as a high-level helper

from transformers import pipeline

pipe = pipeline("translation", model="trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en")
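
A call might look like the sketch below; the sample sentence is only an illustration. Because mBART-50 identifies languages by codes such as "vi_VN" and "en_XX", passing src_lang and tgt_lang explicitly avoids any ambiguity.

# Translate a Vietnamese sentence to English (example sentence is illustrative)
result = pipe("Xin chào, bạn có khỏe không?", src_lang="vi_VN", tgt_lang="en_XX")
print(result[0]["translation_text"])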