Uploaded model

  • Developed by: exillarml
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-7b-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
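
The card does not include the exact training recipe, so the snippet below is only a minimal sketch of a typical Unsloth + TRL LoRA fine-tuning run on the stated base model. The dataset name, LoRA rank, and training arguments are illustrative placeholders, and depending on the installed TRL version `SFTConfig` may be required instead of `TrainingArguments`.

```python
# Illustrative Unsloth + TRL fine-tuning sketch; hyperparameters and the
# dataset name are placeholders, not the confirmed recipe for this model.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model this checkpoint was finetuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are common defaults, not confirmed).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# "your_dental_qa_dataset" is a hypothetical placeholder for the actual training data.
dataset = load_dataset("your_dental_qa_dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=8,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```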

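Example inference with the Transformers library: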
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the finetuned model and its tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("exillarml/fine_tuned_mistral_7b_dental_8_epoch_chatstyle_ml")
tokenizer = AutoTokenizer.from_pretrained("exillarml/fine_tuned_mistral_7b_dental_8_epoch_chatstyle_ml")

# Tokenize a prompt and generate up to 50 new tokens.
inputs = tokenizer("What causes gum bleeding?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
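
For GPUs with limited memory, the same checkpoint can also be loaded with 4-bit quantization via bitsandbytes. This is an optional sketch, not a requirement of the model, and it assumes the `bitsandbytes` package and a CUDA device are available.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Optional: quantize weights to 4-bit at load time to reduce GPU memory use.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "exillarml/fine_tuned_mistral_7b_dental_8_epoch_chatstyle_ml",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "exillarml/fine_tuned_mistral_7b_dental_8_epoch_chatstyle_ml"
)
```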

  • Format: Safetensors
  • Model size: 7.25B params
  • Tensor type: BF16