Phi3.5-MedicalChat
The MedicalChat-Phi-3.5-mini-instruct fine-tuned model is designed to simulate doctor-patient conversations, offering medical consultations and suggestions based on patient queries. However, its accuracy may be limited in real-world scenarios, as the training dataset was relatively small.
```bash
pip install unsloth
```

```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized model and its tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    "syubraj/Phi3.5-medicalchat-unsloth",
    max_seq_length = 1024,
    load_in_4bit = True,
    dtype = None,
)

user_query = "<Your medical query here>"
system_prompt = """You are a trusted AI-powered medical assistant. Analyze patient queries carefully and provide accurate, professional, and empathetic responses. Prioritize patient safety, adhere to medical best practices, and recommend consulting a healthcare provider when necessary."""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_query},
]

# Build the prompt from the tokenizer's chat template
prompt = tokenizer.apply_chat_template(messages, tokenize = False, add_generation_prompt = True)

# Switch Unsloth into inference mode
FastLanguageModel.for_inference(model)

# Tokenize the prompt
inputs = tokenizer(prompt, return_tensors = "pt").to("cuda")

# Generate a response; adjust `max_new_tokens` to the required output length
outputs = model.generate(**inputs, max_new_tokens = 256, use_cache = True)
print(tokenizer.batch_decode(outputs)[0])
```
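Note that `batch_decode` returns the full sequence, prompt included. A minimal sketch of trimming the decoded string down to just the model's reply, assuming the decoded text still contains the Phi-3.5 chat-template markers `<|assistant|>` and `<|end|>` (the example string below is hypothetical, used only to illustrate the parsing):

```python
def extract_reply(decoded: str) -> str:
    """Return only the assistant's reply from a fully decoded sequence.

    Assumes Phi-3.5's chat-template markers: the reply follows the last
    `<|assistant|>` tag and ends at the next `<|end|>` (or end of string).
    """
    reply = decoded.rsplit("<|assistant|>", 1)[-1]
    reply = reply.split("<|end|>", 1)[0]
    return reply.strip()

# Hypothetical decoded output used only to demonstrate the parsing
decoded = (
    "<|system|>\nYou are a trusted AI-powered medical assistant.<|end|>\n"
    "<|user|>\nI have a mild headache.<|end|>\n"
    "<|assistant|>\nStay hydrated and rest; see a doctor if it persists.<|end|>"
)
print(extract_reply(decoded))
# → Stay hydrated and rest; see a doctor if it persists.
```

If `skip_special_tokens=True` is passed to `batch_decode`, these markers are stripped and this parsing is unnecessary.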
| Step | Training Loss |
|---|---|
| 10 | 2.53 |
| 20 | 2.20 |
| 30 | 1.95 |
| 40 | 2.01 |
| 50 | 1.97 |
| 60 | 2.02 |
Base model
microsoft/Phi-3.5-mini-instruct