# Model Card for MedQA LLM

This model is fine-tuned on the keivalya/MedQuad-MedicalQnADataset to answer medical queries accurately, covering question types such as symptoms, diagnosis, prevention, and treatment.
## Model Details

### Model Description

This model, built on LLaMA 3.2 3B, has been fine-tuned specifically for question answering in the medical domain. It aims to assist healthcare providers, researchers, and the general public by offering detailed and accurate responses to queries about medical conditions and treatments.
- Developed by: Ujjwal Mishra
- Model type: Question-Answering on medical data
- Source Model: LLaMA 3.2 3B
## Uses

This model is intended as a first-line source of information for medical queries. It can support digital health applications, help desks, and educational platforms.
### Direct Use

The model can directly answer users' questions about medical issues without any further fine-tuning.
### Downstream Use
This model can be further fine-tuned on more specific medical sub-domains or integrated into medical decision-support systems to enhance its utility.
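As a rough sketch of the data-preparation step for such downstream fine-tuning, each Q&A record can be rendered into the same prompt template the model was trained on. The column names (`qtype`, `Question`, `Answer`) are assumptions about the MedQuad dataset schema; adjust them to match your data.

```python
# Sketch: rendering MedQuad-style records into the card's training prompt.
# The column names ("qtype", "Question", "Answer") are assumptions about
# the dataset schema, not guaranteed by this card.

PROMPT_TEMPLATE = """Below is a Question Type that describes the type of question, paired with a question that asks a question based on medical science. Give an answer that correctly answers the question.
### Question Type:
{qtype}
### Question:
{question}
### Answer:
{answer}"""

def format_example(record: dict) -> str:
    """Render one Q&A record into a single training string."""
    return PROMPT_TEMPLATE.format(
        qtype=record["qtype"],
        question=record["Question"],
        answer=record["Answer"],
    )

# A hypothetical record, for illustration only
sample = {
    "qtype": "treatment",
    "Question": "What are the treatments for seasonal influenza?",
    "Answer": "Antiviral medications such as oseltamivir may be prescribed.",
}
print(format_example(sample))
```

In a full pipeline you would map `format_example` over the dataset (for example with `datasets.Dataset.map`) and tokenize the result before training.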
### Out-of-Scope Use

The model is not a substitute for professional medical advice, diagnosis, or treatment by certified healthcare providers.
## Bias, Risks, and Limitations

Because of its training data, the model may exhibit biases toward more commonly represented diseases and conditions, and it may perform worse on rare conditions or non-English queries.
### Recommendations
Users should verify the information provided by the model with up-to-date and peer-reviewed medical sources or professionals. The model should be continuously monitored and updated to mitigate biases and adapt to new medical knowledge.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from accelerate import Accelerator

# Initialize the Accelerator for mixed precision and faster inference
# (if supported by your hardware)
accelerator = Accelerator()

# Load the fine-tuned model and tokenizer
model_name = "ujjman/llama-3.2-3B-Medical-QnA-unsloth"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Only the model needs device placement; tokenizers hold no tensors
model = accelerator.prepare(model)

# Create a text-generation pipeline on the same device
generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=accelerator.device,
)

def ask_question(question_type, question):
    """Format the training prompt and return only the generated answer."""
    prompt = f"""Below is a Question Type that describes the type of question, paired with a question that asks a question based on medical science. Give an answer that correctly answers the question.
### Question Type:
{question_type}
### Question:
{question}
### Answer:
"""
    # max_new_tokens bounds the answer length; eos_token_id lets generation stop early
    response = generator(
        prompt,
        max_new_tokens=512,
        eos_token_id=tokenizer.eos_token_id,
        num_return_sequences=1,
    )
    # The pipeline returns the prompt plus the completion, so strip the prompt
    answer = response[0]["generated_text"][len(prompt):]
    return answer.strip()

# Example usage
question_type = "prevention"
question = "How can I protect myself from poisoning caused by marine toxins?"
print(ask_question(question_type, question))
```
## Model Tree

- Model: ujjman/llama-3.2-3B-Medical-QnA-unsloth
- Base model: meta-llama/Llama-3.2-3B