# Model Card for yeniguno/mbart50-turkish-grammar-corrector

## Model Summary
This model is a fine-tuned version of facebook/mbart-large-50-many-to-many-mmt on the GECTurk-generation dataset for the task of Turkish grammatical error correction. It takes Turkish sentences containing grammatical mistakes as input and generates grammatically corrected Turkish text.

The model can be used in educational tools, writing assistants, or any NLP application that benefits from clean, correct Turkish grammar. It uses the `tr_TR` language code for both input and output, and works without task-specific prefixes.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("yeniguno/mbart50-turkish-grammar-corrector")
model = AutoModelForSeq2SeqLM.from_pretrained("yeniguno/mbart50-turkish-grammar-corrector")

def correct_turkish(text):
    # Tell the tokenizer the source language is Turkish
    tokenizer.src_lang = "tr_TR"
    encoded = tokenizer(text, return_tensors="pt", max_length=128, truncation=True)
    input_ids = encoded["input_ids"].to(model.device)
    # Force the decoder to start generating in Turkish
    generated_ids = model.generate(
        input_ids,
        max_length=128,
        num_beams=4,
        forced_bos_token_id=tokenizer.lang_code_to_id["tr_TR"],
    )
    return tokenizer.decode(generated_ids[0], skip_special_tokens=True)

print(correct_turkish("Alide geldi."))  # Ali de geldi.
```
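To correct several sentences in one call, the same pattern extends to batched inputs. The sketch below is a minimal variant reusing the `tokenizer` and `model` loaded above; the second input sentence is only an illustration, not an output verified against this model.

```python
def correct_turkish_batch(texts):
    tokenizer.src_lang = "tr_TR"
    # Pad to the longest sentence in the batch so inputs share one tensor shape
    encoded = tokenizer(
        texts, return_tensors="pt", padding=True, max_length=128, truncation=True
    ).to(model.device)
    generated_ids = model.generate(
        **encoded,
        max_length=128,
        num_beams=4,
        forced_bos_token_id=tokenizer.lang_code_to_id["tr_TR"],
    )
    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

# First sentence should come back as "Ali de geldi." (per the example above)
print(correct_turkish_batch(["Alide geldi.", "Bende geldim."]))
```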
## Training Details
The model was trained using Hugging Face Transformers' `Seq2SeqTrainer`.
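A minimal sketch of what such a setup looks like is below. The hyperparameters, dataset variable names, and preprocessing are illustrative assumptions, not the exact configuration used to train this checkpoint.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Start from the base mBART-50 checkpoint with Turkish as source and target
base = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = AutoTokenizer.from_pretrained(base, src_lang="tr_TR", tgt_lang="tr_TR")
model = AutoModelForSeq2SeqLM.from_pretrained(base)

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart50-turkish-grammar-corrector",
    learning_rate=5e-5,              # assumed value
    per_device_train_batch_size=8,   # assumed value
    num_train_epochs=3,              # assumed value
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,   # hypothetical tokenized GECTurk-generation split
    eval_dataset=tokenized_eval,     # hypothetical held-out split
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```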