from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")
model = AutoModelForCausalLM.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")

# A single-turn conversation in the format expected by the chat template
conv = [
  {
    'role': 'user',
    'content': 'What can I do with a Large Language Model?'
  }
]
# Build the prompt tensor from the chat turns and append the assistant generation prefix
prompt = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")
# Generate up to 128 new tokens and decode the full sequence
output = model.generate(prompt, max_new_tokens=128)
print(tokenizer.decode(output[0]))
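To print only the model's reply instead of echoing the prompt, the generated sequence can be sliced at the prompt length before decoding. A minimal sketch building on the snippet above; the sampling settings (do_sample, temperature, top_p) are illustrative assumptions, not values tuned for this model:

# Sample a reply and decode only the newly generated tokens
output = model.generate(
    prompt,
    max_new_tokens=128,
    do_sample=True,      # illustrative sampling settings, not tuned for this model
    temperature=0.7,
    top_p=0.9,
)
# output[0] contains prompt + reply; skip the prompt tokens when decoding
reply = tokenizer.decode(output[0][prompt.shape[-1]:], skip_special_tokens=True)
print(reply)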

Model size: 248M parameters · Tensor type: BF16 (Safetensors)