Update README.md
README.md
tags:
- unsloth
- qwen3
- trl
- qwen-3
- fine-tuning
- openmathreasoning
- python
- lora
- peft
- tutorial
- reasoning
- chat
license: apache-2.0
language:
- en
---
# krishanwalia30/Qwen3-16bit-OpenMathReasoning-Finetuned-Merged

🚀 **Harness the Power of Qwen-3 with Enhanced Reasoning and Chat!** 🚀

This model is a carefully fine-tuned version of [Qwen-3-8B](https://huggingface.co/Qwen/Qwen3-8B), built with [Unsloth](https://github.com/unslothai/unsloth) and Parameter-Efficient Fine-Tuning (PEFT) via LoRA. It is designed to bring you the best of both worlds: the strong general capabilities of Qwen-3 with a significant boost in logical reasoning and engaging conversational skills.

We've taken the already powerful Qwen-3 and fine-tuned it further on a blend of [unsloth/OpenMathReasoning-mini](https://huggingface.co/datasets/unsloth/OpenMathReasoning-mini) (Chain-of-Thought split) for advanced problem-solving and [mlabonne/FineTome-100k](https://huggingface.co/datasets/mlabonne/FineTome-100k) to keep interactions natural and fluent.

**🔥 Key Features:**

* **Enhanced Reasoning:** Excels at tasks requiring logical deduction and step-by-step thinking, thanks to fine-tuning on a dedicated reasoning dataset.
* **Improved Chat:** Maintains and enhances the general conversational abilities of Qwen-3, making it a good fit for interactive applications.
* **Efficient Fine-Tuning:** Trained with the memory-efficient [Unsloth](https://github.com/unslothai/unsloth) library, allowing faster training with less GPU memory.
* **PEFT (LoRA) Inside:** Uses Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning, so the base model can be adapted to new tasks without full retraining.
* **Ready to Use:** Loads directly with the `transformers` library.

**🛠️ How to Get Started:**

Install the necessary libraries:

```bash
pip install transformers accelerate torch
```

Load and use the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "krishanwalia30/Qwen3-16bit-OpenMathReasoning-Finetuned-Merged"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,  # pass the dtype object, not the string "torch.float16"
)

messages = [
    {"role": "user", "content": "Explain the Pythagorean theorem in simple terms."},
    {"role": "assistant", "content": "Okay, here's a simple explanation:"},
    {"role": "user", "content": "Now, solve for the hypotenuse if a=3 and b=4."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, top_p=0.8, top_k=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
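
If you want tokens to print as they are generated (useful for longer, step-by-step answers), `transformers` provides a `TextStreamer` that can be passed to `generate`. Below is a minimal sketch reusing the `model`, `tokenizer`, and `inputs` objects from the snippet above:

```python
from transformers import TextStreamer

# Streams decoded tokens to stdout as they are produced, skipping the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=256, temperature=0.7, top_p=0.8, top_k=20, do_sample=True)
```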

**⚙️ Fine-tuning Details:**

* **Base Model:** [Qwen-3-8B](https://huggingface.co/Qwen/Qwen3-8B)
* **Fine-tuning Framework:** [Unsloth](https://github.com/unslothai/unsloth)
* **PEFT Strategy:** LoRA
* **Training Datasets:**
  * [unsloth/OpenMathReasoning-mini](https://huggingface.co/datasets/unsloth/OpenMathReasoning-mini) (CoT split)
  * [mlabonne/FineTome-100k](https://huggingface.co/datasets/mlabonne/FineTome-100k)
* **Training Ratio:** Approximately 30% reasoning data and 70% general chat data to balance capabilities.
* **Training Infrastructure:** Google Colab with a T4 GPU.
* **Quantization during Training:** 4-bit quantization was likely used during fine-tuning (via Unsloth) for memory efficiency; the final merged model is saved in 16-bit for broader compatibility.
* **Key Hyperparameters** (see the training sketch after this list):
  * `per_device_train_batch_size`: 2
  * `gradient_accumulation_steps`: 4
  * `learning_rate`: 2e-4
  * `max_steps`: 30
  * Optimizer: `adamw_8bit`
  * Learning rate scheduler: `linear`
  * Warmup steps: 5
  * Weight decay: 0.01
  * Seed: 3407
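
To make these settings concrete, here is a rough sketch of how such a run is typically assembled with Unsloth and TRL's `SFTTrainer`. This is **not** the exact training script: the LoRA rank, target modules, sequence length, and the toy dataset are assumptions for illustration (the real run mixes the two datasets above at roughly 30/70), and newer TRL releases move `dataset_text_field` and `max_seq_length` into `SFTConfig`.

```python
# Hedged sketch of an Unsloth + TRL SFT run using the hyperparameters listed above.
# LoRA rank, target_modules, max_seq_length, and the placeholder dataset are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

max_seq_length = 2048  # assumption

# Load the base model in 4-bit for memory-efficient training on a T4.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-8B",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters (parameter-efficient fine-tuning).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank: assumption
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: in the real run this is the ~30/70 reasoning/chat mix,
# rendered through the chat template into a single "text" column.
dataset = Dataset.from_dict(
    {"text": ["<|im_start|>user\nWhat is 2 + 2?<|im_end|>\n<|im_start|>assistant\n4<|im_end|>"]}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=30,
        warmup_steps=5,
        weight_decay=0.01,
        lr_scheduler_type="linear",
        optim="adamw_8bit",
        seed=3407,
        output_dir="outputs",
    ),
)
trainer.train()

# Afterwards, Unsloth can merge the LoRA weights and save a 16-bit model, e.g.:
# model.save_pretrained_merged("merged-16bit", tokenizer, save_method="merged_16bit")
```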

**📊 Evaluation:**

While rigorous quantitative evaluations are still ongoing, initial assessments indicate a significant improvement in the model's ability to handle reasoning-based questions while maintaining strong general conversational skills. Further benchmarks and community feedback are welcome!
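
If you would like to contribute numbers, one quick way to benchmark the math-reasoning side is EleutherAI's `lm-evaluation-harness`. A possible invocation (the task choice and batch size are suggestions, not part of this model's documented evaluation):

```bash
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=krishanwalia30/Qwen3-16bit-OpenMathReasoning-Finetuned-Merged,dtype=float16 \
  --tasks gsm8k \
  --batch_size 4
```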

**👨‍💻 Author:**

[krishanwalia30](https://huggingface.co/krishanwalia30)

**📖 Learn More:**

For a deeper dive into the fine-tuning process and the rationale behind the choices, check out the accompanying article: [https://medium.com/@krishanw30/b1a8f684c3f1](https://medium.com/@krishanw30/b1a8f684c3f1).

**🙏 Acknowledgements:**

A big thank you to the brilliant teams at [Qwen](https://huggingface.co/Qwen) and [Unsloth AI](https://github.com/unslothai/unsloth), and to the creators of the [OpenMathReasoning-mini](https://huggingface.co/datasets/unsloth/OpenMathReasoning-mini) and [FineTome-100k](https://huggingface.co/datasets/mlabonne/FineTome-100k) datasets, for making this project possible!

# Uploaded model

- **Developed by:** krishanwalia30