## How to use

### Fine-tuning

Load the model and tokenizer directly:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en")
model = AutoModelForSeq2SeqLM.from_pretrained("trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en")
```
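Once the model and tokenizer are loaded, fine-tuning data has to be tokenized with mBART-50's explicit language codes (`vi_VN` for Vietnamese, `en_XX` for English). The following is a minimal sketch of preparing one parallel sentence pair for seq2seq training; the example sentences are hypothetical, and in practice you would feed batches like this to `Seq2SeqTrainer` or a custom training loop:

```python
from transformers import AutoTokenizer

model_id = "trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# mBART-50 tokenizers require explicit source/target language codes.
tokenizer.src_lang = "vi_VN"
tokenizer.tgt_lang = "en_XX"

# Hypothetical Vietnamese→English parallel pair, for illustration only.
batch = tokenizer(
    "Tôi yêu lập trình.",               # Vietnamese source
    text_target="I love programming.",  # English target
    max_length=128,
    truncation=True,
    return_tensors="pt",
)

# batch["input_ids"] holds the source tokens;
# batch["labels"] holds the target token ids used for the seq2seq loss.
print(batch["input_ids"].shape, batch["labels"].shape)
```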
### Inference

Use a `pipeline` as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("translation", model="trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en")
```
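Because the underlying checkpoint is mBART-50 many-to-many, the pipeline usually needs the source and target language codes passed explicitly. A minimal sketch, assuming the standard mBART-50 codes `vi_VN` and `en_XX` (the example input sentence is illustrative):

```python
from transformers import pipeline

# src_lang / tgt_lang are the mBART-50 codes for Vietnamese and English.
pipe = pipeline(
    "translation",
    model="trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en",
    src_lang="vi_VN",
    tgt_lang="en_XX",
)

result = pipe("Xin chào thế giới!")  # Vietnamese for "Hello world!"
print(result[0]["translation_text"])
```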
## Model tree

`trieutm/mbart-large-50-many-to-many-mmt-finetuned-vi-to-en` is fine-tuned from the base model `facebook/mbart-large-50`.