---
license: apache-2.0
datasets:
- amphora/QwQ-LongCoT-130K
language:
- en
metrics:
- perplexity
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
---
## Model Details:
- **Base Model:** Qwen/Qwen2-0.5B-Instruct
- **Teacher Model:** Qwen/QwQ-32B-Preview
- **Distillation Framework:** Instruction Tuning
- **Task Type:** Conversational AI / Causal Language Modeling
- **Parameters:** 0.5B
- **Special Features:**
  - Integrated gradient checkpointing for efficient training (see the sketch below)
  - Step-by-step reasoning capabilities for better problem-solving
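A minimal sketch (illustrative, not taken from this card's training code) of how gradient checkpointing can be switched on with the standard `transformers` API:
```python
# Illustrative only: enable gradient checkpointing to trade extra compute for lower memory.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-0.5B-Instruct", torch_dtype=torch.bfloat16
)
model.gradient_checkpointing_enable()  # same effect as --gradient_checkpointing in the training script below
model.config.use_cache = False         # the generation KV cache is incompatible with checkpointing
```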
---
## Training:
QwQ-0.5B-Distilled was trained using the **QwQ-LongCoT-130K dataset**, a carefully curated collection of long chain-of-thought examples designed for reasoning and conversational AI tasks. Supervised fine-tuning on the teacher's responses ensures that the student model mimics the teacher model's outputs, aligning its predictions with high-quality responses.
### Training Progress:
[██████████] 100%
### Training Script:
```python
import argparse

import torch
from datasets import Dataset, load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM, SFTConfig, SFTTrainer

# Command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument("--max_length", type=int, default=4096)
parser.add_argument("--output_dir", type=str, default="gkd-model")
parser.add_argument("--per_device_train_batch_size", type=int, default=1)
parser.add_argument("--gradient_accumulation_steps", type=int, default=16)
parser.add_argument("--gradient_checkpointing", action="store_true", default=False)
parser.add_argument("--resume_from_checkpoint", action="store_true", default=False)
parser.add_argument("--lora", action="store_true")
args = parser.parse_args()

# Load QwQ-LongCoT-130K and convert each record into a chat-format example,
# with QwQ-32B-Preview's response ("qwq") as the target the student imitates.
qwq_dataset = load_dataset("amphora/QwQ-LongCoT-130K", split="train")
messages = []
for each in qwq_dataset:
    msg = [
        {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
        {"role": "user", "content": each["problem"]},
        {"role": "assistant", "content": each["qwq"]},
    ]
    messages.append(msg)

# 90% / 10% train / eval split
TRAIN_SPLIT_RATIO = 0.9
train_size = int(TRAIN_SPLIT_RATIO * len(messages))

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

# The student model to optimise
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-0.5B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)

train_dataset = Dataset.from_dict({"messages": messages[:train_size]})
eval_dataset = Dataset.from_dict({"messages": messages[train_size:]})

training_args = SFTConfig(
    output_dir=args.output_dir,
    max_seq_length=args.max_length,
    per_device_train_batch_size=args.per_device_train_batch_size,
    gradient_accumulation_steps=args.gradient_accumulation_steps,
    gradient_checkpointing=args.gradient_checkpointing,
    save_steps=100,
    save_total_limit=5,
)

# Optional LoRA adapters (enabled with --lora)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Compute the loss only on the assistant (teacher) response tokens
response_template = "<|im_start|>assistant\n"
collator = DataCollatorForCompletionOnlyLM(response_template, tokenizer=tokenizer)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    peft_config=lora_config if args.lora else None,
    data_collator=collator,
)
trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
```
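The `DataCollatorForCompletionOnlyLM` above masks out everything before the assistant turn, so the loss is computed only on the teacher's (QwQ-32B-Preview's) response. A minimal, illustrative sketch of that behavior, separate from the training script:
```python
# Illustrative only: prompt tokens are labeled -100 (ignored by the loss);
# only the assistant response tokens contribute to training.
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
collator = DataCollatorForCompletionOnlyLM("<|im_start|>assistant\n", tokenizer=tokenizer)

text = tokenizer.apply_chat_template(
    [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 = 4."},
    ],
    tokenize=False,
)
batch = collator([tokenizer(text)])
print(batch["labels"][0])  # -100 everywhere except the assistant answer tokens
```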
### Dataset:
- **Source:** `amphora/QwQ-LongCoT-130K`
- **Split:** 90% training, 10% evaluation (see the quick field check below)
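A quick, illustrative way to look at the raw fields the training script reads:
```python
# Illustrative only: peek at the two dataset fields consumed during training.
from datasets import load_dataset

ds = load_dataset("amphora/QwQ-LongCoT-130K", split="train")
print(len(ds))                 # total examples; the first 90% are used for training
print(ds[0]["problem"][:200])  # the user problem statement
print(ds[0]["qwq"][:200])      # QwQ-32B-Preview's step-by-step answer (the SFT target)
```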
---
## Example Usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model name
model_name = "kz919/QwQ-0.5B-Distilled-SFT"

# Load the model
print(f"Starting to load the model {model_name} into memory")
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map={"": 0},
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Define the prompt
prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt},
]

# Apply the chat template and tokenize the input
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

# Decode the response
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
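Because the model emits long step-by-step traces (up to 4,096 new tokens above), streaming the output can be more convenient. An optional sketch using `transformers.TextStreamer`, not part of the original example:
```python
# Optional: stream the chain-of-thought to stdout as it is generated.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=4096,
    streamer=streamer,
)
```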
---
## Applications:
1. **Conversational Assistants:**
Suitable for AI chatbots that require reasoning and long-context understanding.
2. **Educational Tools:**
Provides step-by-step explanations, making it ideal for learning environments.
3. **Creative Writing:**
Assists in generating coherent, contextually aware long-form content.
4. **Technical Support:**
Handles complex customer queries with precision and clarity.
---
## Limitations:
- While distilled for efficiency, performance on highly complex reasoning tasks may slightly trail the teacher model.
- Warning 🚨🚨🚨: This model is not fully trained; it is merely a proof of concept. Don't yell at me if it outputs nonsense.
---
## Citation:
If you use this model in your research or applications, please cite it as:
```bibtex
@misc{qwq_0.5B_distilled,
author = {Kaizhao Liang},
title = {QwQ-0.5B-Distilled: A Reasoning Model for Edge Devices},
year = {2024},
publisher = {Hugging Face},
version = {1.0}
}
```
---
This model is an example of how efficient fine-tuning and distillation methods can deliver robust conversational AI capabilities in a smaller, more manageable footprint.