# Model Card for SmolLM2-360M-MedReason

## Model Details
This model is a fine-tuned version of SmolLM2-360M-Instruct on the medical reasoning dataset UCSC-VLAA/MedReason.

### Model Description
- Developed by: Rustam Shiriyev
- Model type: Instruction-tuned model
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: unsloth/SmolLM2-360M-Instruct
## How to Get Started with the Model

```python
from huggingface_hub import login
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="")  # your Hugging Face access token

# Load the base model and tokenizer, then attach the fine-tuned adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-360M-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/SmolLM2-360M-Instruct",
    device_map={"": 0},
)
model = PeftModel.from_pretrained(base_model, "Rustamshry/SmolLM2-360M-MedReason")

question = "Which of the following nipple discharge is most probably physiological?"
options = """Answer Choices:
A. B/L spontaneous discharge
B. B/L milky discharge with squeezing from multiple ducts
C. U/L bloody discharge
D. U/L bloody discharge with squeezing from a single duct"""

prompt = f"""### Question:\n{question}\n{options}\n\n### Response:\n"""

input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **input_ids,
    max_new_tokens=2000,
    # temperature=0.6,
    # top_p=0.95,
    # do_sample=True,
    # eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
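To reuse the same prompt format for other questions, the template above can be wrapped in a small helper. This is a minimal sketch; `build_prompt` is a hypothetical name, and the function simply mirrors the `### Question:` / `### Response:` template shown in the example above:

```python
def build_prompt(question: str, options: list[str]) -> str:
    """Format a multiple-choice question using the prompt template above."""
    letters = "ABCDEFGHIJ"
    choices = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
    return f"### Question:\n{question}\nAnswer Choices:\n{choices}\n\n### Response:\n"
```

The returned string can be tokenized and passed to `model.generate` exactly as in the snippet above.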
### Framework versions

- PEFT 0.14.0
## Model tree for Rustamshry/SmolLM2-360M-MedReason

- Base model: HuggingFaceTB/SmolLM2-360M
- Quantized: HuggingFaceTB/SmolLM2-360M-Instruct
- Finetuned: unsloth/SmolLM2-360M-Instruct