---
language: en
tags:
  - medical
  - llama
  - unsloth
  - qlora
  - finetuned
  - chatbot
license: apache-2.0
datasets:
  - custom-medical-qa
base_model: ContactDoctor/Bio-Medical-Llama-3-8B
model_creator: khalednabawi11
library_name: transformers
pipeline_tag: text-generation
---


# Bio-Medical LLaMA 3 8B - Fine-Tuned  

πŸš€ **Fine-tuned version of [ContactDoctor/Bio-Medical-Llama-3-8B](https://huggingface.co/ContactDoctor/Bio-Medical-Llama-3-8B) using Unsloth for enhanced medical Q&A capabilities.**  

## πŸ“Œ Model Details  

- **Model Name:** Bio-Medical LLaMA 3 8B - Fine-Tuned  
- **Base Model:** ContactDoctor/Bio-Medical-Llama-3-8B  
- **Fine-Tuning Method:** QLoRA with Unsloth  
- **Domain:** Medical Question Answering  
- **Dataset:** Medical Q&A dataset (MQA.json)  

## πŸ› οΈ Training Configuration  

- **Epochs:** 4  
- **Batch Size:** 2  
- **Gradient Accumulation:** 4  
- **Learning Rate:** 2e-4  
- **Optimizer:** AdamW (8-bit)  
- **Weight Decay:** 0.01  
- **Warmup Steps:** 50  
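
The training script itself is not published with this card; as a rough sketch, the hyperparameters above map onto a plain configuration dict whose keys mirror `transformers.TrainingArguments` fields (which Unsloth's trainer accepts). Note that the effective batch size per optimizer step is the per-device batch size times the gradient-accumulation steps:

```python
# Hyperparameters from the table above, as a plain dict whose keys mirror
# transformers.TrainingArguments fields. A sketch -- the original training
# script is not included with this model card.
training_config = {
    "num_train_epochs": 4,
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "learning_rate": 2e-4,
    "optim": "adamw_8bit",  # 8-bit AdamW via bitsandbytes
    "weight_decay": 0.01,
    "warmup_steps": 50,
}

# Effective batch size seen by the optimizer per update step: 2 * 4 = 8
effective_batch = (training_config["per_device_train_batch_size"]
                   * training_config["gradient_accumulation_steps"])
```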

## πŸ”§ LoRA Parameters  

- **LoRA Rank (r):** 16  
- **LoRA Alpha:** 16  
- **LoRA Dropout:** 0  
- **Bias:** `none`  
- **Target Layers:**  
  - q_proj  
  - k_proj  
  - v_proj  
  - o_proj  
  - gate_proj  
  - up_proj  
  - down_proj  
- **Gradient Checkpointing:** Enabled (Unsloth)  
- **Random Seed:** 3407  
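
Restated as code, the LoRA setup above corresponds roughly to Unsloth's `FastLanguageModel.get_peft_model` call. This is a sketch assuming a standard Unsloth QLoRA workflow; `max_seq_length` is an assumption, as the card does not state it:

```python
# LoRA target layers from the table above, as plain data.
TARGET_MODULES = ["q_proj", "k_proj", "v_proj", "o_proj",
                  "gate_proj", "up_proj", "down_proj"]

if __name__ == "__main__":
    # Heavy import and model download kept behind the main guard;
    # requires a CUDA GPU and the `unsloth` package.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="ContactDoctor/Bio-Medical-Llama-3-8B",
        max_seq_length=2048,   # assumption: not stated on this card
        load_in_4bit=True,     # QLoRA: 4-bit quantized base weights
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        lora_dropout=0,
        bias="none",
        target_modules=TARGET_MODULES,
        use_gradient_checkpointing="unsloth",
        random_state=3407,
    )
```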

## πŸš€ Model Capabilities  

- Optimized for **low-memory inference**  
- Supports **long medical queries**  
- Efficient **parameter-efficient tuning (LoRA)**  

## πŸ“Š Usage  

This model is suitable for **medical question answering**, **clinical chatbot applications**, and **biomedical research assistance**.  
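
A minimal inference sketch using Hugging Face `transformers` and the Llama-3 chat format (the repo id below is a placeholder for this model's actual Hub id; replace it before running):

```python
# Llama-3 chat-style prompt; roles follow the standard chat template.
messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "What are common symptoms of anemia?"},
]

if __name__ == "__main__":
    # Model download and generation behind the guard; requires a GPU
    # with enough memory for an 8B model.
    import torch
    from transformers import pipeline

    MODEL_ID = "<this-repo-id>"  # placeholder: replace with this model's Hub id
    pipe = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    out = pipe(messages, max_new_tokens=256)
    print(out[0]["generated_text"][-1]["content"])
```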

## πŸ”— References  

- [Unsloth Documentation](https://github.com/unslothai/unsloth)  
- [Hugging Face Transformers](https://huggingface.co/docs/transformers/index)  

---
πŸ’‘ **Contributions & Feedback**: Open to collaboration! Feel free to reach out.