---
language: en
tags:
- medical
- llama
- unsloth
- qlora
- finetuned
- chatbot
license: apache-2.0
datasets:
- custom-medical-qa
base_model: ContactDoctor/Bio-Medical-Llama-3-8B
model_creator: khalednabawi11
library_name: transformers
pipeline_tag: text-generation
---
# Bio-Medical LLaMA 3 8B - Fine-Tuned
πŸš€ **Fine-tuned version of [ContactDoctor/Bio-Medical-Llama-3-8B](https://huggingface.co/ContactDoctor/Bio-Medical-Llama-3-8B) using Unsloth for enhanced medical Q&A capabilities.**
## πŸ“Œ Model Details
- **Model Name:** Bio-Medical LLaMA 3 8B - Fine-Tuned
- **Base Model:** ContactDoctor/Bio-Medical-Llama-3-8B
- **Fine-Tuning Method:** QLoRA with Unsloth
- **Domain:** Medical Question Answering
- **Dataset:** Medical Q&A dataset (MQA.json)
## πŸ› οΈ Training Configuration
- **Epochs:** 4
- **Batch Size:** 2
- **Gradient Accumulation Steps:** 4
- **Learning Rate:** 2e-4
- **Optimizer:** AdamW (8-bit)
- **Weight Decay:** 0.01
- **Warmup Steps:** 50
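
The training script itself is not included in this repository; as a rough sketch, the hyperparameters above map onto Hugging Face `TrainingArguments` as shown below (`output_dir`, the precision flag, and `logging_steps` are illustrative assumptions, not values from the original run):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir, fp16 and
# logging_steps are assumptions, not values from the original run.
training_args = TrainingArguments(
    output_dir="outputs",
    num_train_epochs=4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    optim="adamw_8bit",          # 8-bit AdamW via bitsandbytes
    weight_decay=0.01,
    warmup_steps=50,
    seed=3407,
    fp16=True,                   # or bf16=True on Ampere+ GPUs
    logging_steps=10,
)
```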
## πŸ”§ LoRA Parameters
- **LoRA Rank (r):** 16
- **LoRA Alpha:** 16
- **LoRA Dropout:** 0
- **Bias:** none
- **Target Layers:**
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
- **Gradient Checkpointing:** Enabled (Unsloth)
- **Random Seed:** 3407
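
A minimal sketch of how these adapter settings map onto Unsloth's API (`max_seq_length` and `load_in_4bit` are assumptions, not published settings):

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit for QLoRA; max_seq_length is an assumed value.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ContactDoctor/Bio-Medical-Llama-3-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters with the parameters listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)
```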
## πŸš€ Model Capabilities
- Optimized for **low-memory inference**
- Supports **long medical queries**
- Fine-tuned with **parameter-efficient QLoRA adapters**
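
For low-memory inference, the model can be loaded in 4-bit precision with bitsandbytes. A sketch (replace the placeholder repo id with this model's actual Hub id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "<this-repo-id>"  # replace with this repository's Hub id

# NF4 quantization keeps the 8B weights to roughly 5-6 GB of GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```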
## πŸ“Š Usage
This model is suitable for **medical question answering**, **clinical chatbot applications**, and **biomedical research assistance**.
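
A minimal generation example using the Llama 3 chat template (the repo id, system prompt, and sampling settings are illustrative; for lower memory use, load the model in 4-bit as sketched above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # replace with this repository's Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful biomedical assistant."},
    {"role": "user", "content": "What are common symptoms of iron-deficiency anemia?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```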
## πŸ”— References
- [Unsloth Documentation](https://github.com/unslothai/unsloth)
- [Hugging Face Transformers](https://huggingface.co/docs/transformers/index)
---
πŸ’‘ **Contributions & Feedback**: Open to collaboration! Feel free to reach out.