---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-3B-Instruct
---
# Qwen2.5-3B-Instruct Fine-Tuned Model

## 📌 Model Overview
This repository contains a fine-tuned version of **Qwen2.5-3B-Instruct**, trained with Unsloth. The model is optimized for **multi-hop reasoning, scientific Q&A, and retrieval-augmented generation (RAG)**, pairing FAISS dense retrieval with BM25 sparse retrieval.

- **Base Model**: [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
- **Fine-Tuning Framework**: Unsloth
- **Quantization**: 4-bit GGUF & 16-bit versions available
- **Training Methods**: SFT (Supervised Fine-Tuning) + ORPO (Odds Ratio Preference Optimization)

---
## 🔥 Fine-Tuning Details
### **1️⃣ Datasets Used**
- **HotpotQA**: Multi-hop reasoning dataset
- **Synthetic QA**: Created using extracted document chunks
- **BM25 & FAISS Retrieval**: Used to retrieve supporting passages for training examples (a minimal hybrid-retrieval sketch follows below)
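
The hybrid retrieval can be approximated with the sketch below. This is a minimal illustration, not the exact pipeline: the toy corpus, the `all-MiniLM-L6-v2` encoder, and the 50/50 score-fusion weight are assumptions, since this card does not specify them; only the BM25 + FAISS pairing comes from the card.

```python
# Minimal BM25 + FAISS hybrid retrieval sketch. The encoder, corpus, and
# fusion weight (alpha) are assumptions.
import numpy as np
import faiss
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

corpus = [
    "FAISS performs dense nearest-neighbour search over embeddings.",
    "BM25 ranks documents by lexical overlap with the query.",
    "Hybrid retrieval blends sparse and dense scores.",
]

# Sparse side: BM25 over whitespace-tokenized passages.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

# Dense side: normalized embeddings so inner product equals cosine similarity.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
emb = encoder.encode(corpus, normalize_embeddings=True).astype(np.float32)
index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)

def hybrid_search(query: str, k: int = 2, alpha: float = 0.5):
    """Return top-k passages by a blend of min-max-scaled BM25 and cosine scores."""
    sparse = bm25.get_scores(query.lower().split())
    sparse = (sparse - sparse.min()) / (sparse.max() - sparse.min() + 1e-9)

    q = encoder.encode([query], normalize_embeddings=True).astype(np.float32)
    scores, ids = index.search(q, len(corpus))
    dense = np.zeros(len(corpus), dtype=np.float32)
    dense[ids[0]] = scores[0]

    combined = alpha * sparse + (1 - alpha) * dense
    top = np.argsort(combined)[::-1][:k]
    return [(corpus[i], float(combined[i])) for i in top]

print(hybrid_search("how does dense retrieval work?"))
```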

### **2️⃣ Training Configuration**
- **LoRA Fine-Tuning**: PEFT with Unsloth
- **Hyperparameters**:
  - `r=16, lora_alpha=16, lora_dropout=0`
  - `gradient_accumulation_steps=4`
  - `max_seq_length=2048`
  - `learning_rate=2e-4`
  - `max_steps=200`
  - `optimizer=adamw_8bit`
  
- **Preference Fine-Tuning (ORPO)**: Used to improve reasoning quality; ORPO optimizes preference pairs directly rather than running a classic RL loop (see the sketches below)
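
For reproducibility, here is a sketch of the SFT stage using Unsloth's `FastLanguageModel` with TRL's `SFTTrainer`. Only the hyperparameters listed above come from this card; the `target_modules` list, the per-device batch size, and the one-row dataset are assumptions.

```python
# SFT sketch with the hyperparameters listed above. target_modules, the
# per-device batch size, and the toy dataset are assumptions.
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # common choice, assumed
)

train_dataset = Dataset.from_dict({"text": ["Q: ...\nA: ..."]})  # placeholder

trainer = SFTTrainer(  # TRL <= 0.11-style arguments; newer TRL moves these into SFTConfig
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # assumed
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=200,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```

The ORPO stage can then be run with TRL's `ORPOTrainer` on (prompt, chosen, rejected) triples; the single-row dataset below is a placeholder, not the actual preference data.

```python
# ORPO preference-optimization sketch; the preference pairs are placeholders.
from trl import ORPOConfig, ORPOTrainer

pref_data = Dataset.from_dict({
    "prompt":   ["Why filter mixed-language chain-of-thought outputs?"],
    "chosen":   ["Mixed-language traces make answers inconsistent, so ..."],
    "rejected": ["There is no reason to filter them."],
})

orpo_trainer = ORPOTrainer(
    model=model,  # the LoRA model from the SFT stage
    args=ORPOConfig(output_dir="orpo_outputs", max_steps=200),  # steps assumed
    train_dataset=pref_data,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL versions
)
orpo_trainer.train()
```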

---
## 📁 Files Included
- `pytorch_model-00001-of-00002.bin` - Model weights (shard 1 of 2)
- `pytorch_model-00002-of-00002.bin` - Model weights (shard 2 of 2)
- `pytorch_model.bin.index.json` - Maps parameter names to weight shards
- `config.json` - Model configuration
- `tokenizer.json` - Serialized tokenizer
- `tokenizer_config.json` - Tokenizer configuration
- `merges.txt` - BPE merge rules
- `vocab.json` - Token vocabulary
- `special_tokens_map.json` - Special-token mapping
- `generation_config.json` - Default generation settings
- `unsloth.Q4_K_M.gguf` - **Quantized 4-bit version** for llama.cpp
- `unsloth.F16.gguf` - **16-bit version** for full-precision inference

---
## 🚀 Model Usage
### **Load Model in Python**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "HasinduNimesh/qwen3b-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Qwen2.5-Instruct models are trained on chat-formatted input, so wrap the
# question in the chat template rather than passing raw text.
messages = [{
    "role": "user",
    "content": "Why is it necessary to filter out chain-of-thought outputs "
               "with mixed languages, long paragraphs, and code blocks?",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# max_new_tokens bounds the completion length, independent of prompt length.
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

### **Use with llama.cpp (4-bit GGUF)**
```python
from llama_cpp import Llama

# n_ctx matches the 2048-token max_seq_length used during fine-tuning.
llm = Llama(model_path="unsloth.Q4_K_M.gguf", n_ctx=2048)

prompt = "Summarize the latest research on AI safety."
output = llm(prompt, max_tokens=200)  # raw completion; returns an OpenAI-style dict
print(output["choices"][0]["text"])
```
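
Since the fine-tune targets instruct-style prompts, llama-cpp-python's chat API may track the training format more closely than the raw completion call above; the variant below reuses the same `llm` object. `unsloth.F16.gguf` can be swapped in via `model_path` for full-precision inference.

```python
# Chat-style call; llama-cpp-python applies the chat template stored in the
# GGUF metadata when one is available.
chat = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the latest research on AI safety."}],
    max_tokens=200,
)
print(chat["choices"][0]["message"]["content"])
```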

---
## 🛠 Future Improvements
- **Improve dataset diversity**: Add more diverse reasoning datasets
- **Optimize retrieval**: Enhance FAISS & BM25 hybrid retrieval
- **Expand RL fine-tuning**: Improve reward models for ORPO

---
## 🛡️ License
This model is available under the **MIT License**. Please follow [Hugging Face's guidelines](https://huggingface.co/docs/hub/models-the-hub) for responsible AI usage.

---
## 🤝 Acknowledgements
- **Unsloth**: For efficient Qwen fine-tuning
- **Hugging Face**: Model hosting & dataset tools
- **DeepSeek & Qwen Teams**: For providing base models

---
_📢 For issues or improvements, please open a discussion on Hugging Face!_ 🚀