How to Get Started with the Model

Use the code below to get started with the model: it loads the tokenizer and model from the Hub, then translates a Hindi sentence into Sanskrit (Devanagari).

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the tokenizer and the fine-tuned Hindi-to-Sanskrit model from the Hub.
tokenizer = AutoTokenizer.from_pretrained("Pretam/hindi_sanskrit")
model = AutoModelForSeq2SeqLM.from_pretrained("Pretam/hindi_sanskrit")

# Hindi input sentence to translate.
article = "इसके लिए साधनों अनुष्ठान तो करना ही चाहिए।"
inputs = tokenizer(article, return_tensors="pt")

# Force the decoder to generate Sanskrit in Devanagari script ("san_Deva").
translated_tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("san_Deva"),
    max_length=30,
)

# Decode the generated tokens back to text.
translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
print(translation)
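
To translate several sentences at once, the sketch below batches the inputs with padding. It assumes the same checkpoint and the same "san_Deva" target-language code as the example above; the batch contents here are purely illustrative.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Pretam/hindi_sanskrit")
model = AutoModelForSeq2SeqLM.from_pretrained("Pretam/hindi_sanskrit")

# Illustrative Hindi sentences; replace with your own inputs.
sentences = [
    "इसके लिए साधनों अनुष्ठान तो करना ही चाहिए।",
    # add more Hindi sentences here
]

# Tokenize the whole batch with padding so all tensors share one shape.
inputs = tokenizer(sentences, return_tensors="pt", padding=True)

# Force the decoder to start generating Sanskrit in Devanagari script.
translated_tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("san_Deva"),
    max_length=30,
)

# Decode every sequence in the batch back to text.
translations = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
for src, tgt in zip(sentences, translations):
    print(src, "->", tgt)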