🧠 Qwen-Distilled-Scout-1.5B-Instruct-Gen2
This model is a fine-tuned version of deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, enhanced with instruction-tuned chain-of-thought (CoT) reasoning across four problem domains: math, text-to-SQL, medical reasoning, and Python programming.
Fine-tuning was conducted using DeepSpeed on a multi-A100 GPU setup via RunPod for efficient training in memory-constrained environments. The training dataset includes CoT-formatted tasks with natural language questions and structured reasoning paths.
An inference notebook is publicly available here.
📎 Model Details
- Base Model: eagle0504/qwen-distilled-scout-1.5b-instruct-gen1
- Language: English
- Architecture: Causal Language Model (Decoder-only)
- Tokenizer: AutoTokenizer from base model
- Parameter Count: 1.5 Billion
- Training Framework: 🤗 Transformers + DeepSpeed
- Compute Environment: RunPod (6 × A100 SXM, 192 vCPU, 1.5 TB RAM)
🧪 Training Dataset
Datasets Used:
- gretelai/synthetic_text_to_sql
- eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1
- eagle0504/augmented_codealpaca-20k-using-together-ai-deepseek-v1
- FreedomIntelligence/medical-o1-reasoning-SFT
Each example in the dataset follows the structure:
```
<instruction>This is a [math/SQL/Python/medical] problem.</instruction>
<question>...</question>
<think>...</think>
<response>...</response>
```
This instruction format ensures that the model understands the task type explicitly and applies step-by-step reasoning across all domains.
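For concreteness, here is a minimal sketch of how a raw sample could be rendered into this tagged format before tokenization. The `format_example` helper and its argument names are illustrative assumptions, not the exact preprocessing script used for this model:

```python
def format_example(domain: str, question: str, chain_of_thought: str, answer: str) -> str:
    """Wrap one raw sample in the instruction/question/think/response tags described above.
    Argument names are illustrative; column names differ across the four source datasets."""
    return (
        f"<instruction>This is a {domain} problem.</instruction>"
        f"<question>{question}</question>"
        f"<think>{chain_of_thought}</think>"
        f"<response>{answer}</response>"
    )


# Example: a GSM8K-style math sample rendered into the training format.
print(format_example(
    domain="math",
    question=(
        "Natalia sold clips to 48 of her friends in April, and then she sold "
        "half as many clips in May. How many clips did Natalia sell altogether?"
    ),
    chain_of_thought="April: 48 clips. May: 48 / 2 = 24 clips. Total: 48 + 24 = 72 clips.",
    answer="72",
))
```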
📊 Fine-Tuning Summary
The base model deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B was fine-tuned on four different datasets using DeepSpeed across various RunPod infrastructure setups. Below is a consolidated summary of the training configurations and results:
| Dataset ID | Dataset Description | GPUs | vCPUs | RAM (GB) | Disk per GPU | Container Image | Duration | Cost (per hr) | Total Cost | DeepSpeed Stage | Precision | Mean Token Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| eagle0504/openai-gsm8k-enhanced-using-together-ai-deepseek-train8k-test1k-v1 | OpenAI GSM8K Enhanced v2 | 6 × H100 PCIe | 144 | 1132 | 20 GB | runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04 | 3 hrs | ~$14 | ~$42 | Stage 1 | FP16 | 98% |
| eagle0504/augmented_codealpaca-20k-using-together-ai-deepseek-v1 | GSM8K + CodeAlpaca-20K Enhanced | 4 × A100 SXM | 146 | 1144 | 20 GB | runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04 | 3 hrs | ~$7+ | ~$21+ | Stage 1 | FP16 | 98% |
| gretelai/synthetic_text_to_sql | Custom CoT + SQL Reasoning | 6 × A100 SXM | 192 | 1536 | 20 GB | runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04 | 2.5 hrs | ~$21 | ~$52.5 | Stage 2 | FP16 | 97% |
| FreedomIntelligence/medical-o1-reasoning-SFT | CoT + Medical Reasoning | 4 × A100 SXM | 146 | 1144 | 20 GB | runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04 | 17 hrs | ~$7+ | ~$119 | Stage 2 | FP16 | 99% |
🏗️ Training Configuration
Training was performed with the following configuration (a sketch of the corresponding `TrainingArguments` and DeepSpeed setup follows the list):
- Batch Size: 2 (with gradient accumulation steps = 4)
- Epochs: 15
- Max Length: 1024 tokens
- Optimizer: AdamW
- Learning Rate: 5e-5 (with warmup + linear decay)
- Precision: FP16
- DeepSpeed Config:
  - Zero Redundancy Optimizer (ZeRO) Stage 2
  - Gradient Clipping: 1.0
  - AllGather + ReduceScatter optimization
- Checkpoint Saving: Disabled to minimize disk usage
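As a rough illustration, the list above maps onto a Hugging Face `TrainingArguments` plus DeepSpeed configuration roughly as follows. This is a sketch under stated assumptions, not the exact training script; in particular, the `warmup_ratio` value and the inline config dict (rather than a JSON file) are assumptions:

```python
from transformers import TrainingArguments

# Illustrative ZeRO config mirroring the settings listed above.
# Stage 2 is shown here; Stage 1 was used for some runs in the summary table.
deepspeed_config = {
    "zero_optimization": {"stage": 2},
    "gradient_clipping": 1.0,
    "fp16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(
    output_dir="./qwen-distilled-scout-1.5b-instruct-gen2",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=15,
    learning_rate=5e-5,
    lr_scheduler_type="linear",   # linear decay after warmup
    warmup_ratio=0.03,            # assumption: the warmup fraction is not stated in this card
    max_grad_norm=1.0,            # gradient clipping at 1.0
    fp16=True,
    save_strategy="no",           # checkpoint saving disabled to minimize disk usage
    deepspeed=deepspeed_config,   # also accepts a path to a DeepSpeed JSON config file
)
```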
🧶 Evaluation Metric
The model is evaluated with a custom token-level accuracy metric (a minimal sketch follows the list):
- Metric: Mean token-level accuracy
- Definition: Accuracy over all non-masked tokens (`labels != -100`)
- Implementation: NumPy-based vectorized comparison between predicted tokens and ground truth
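The metric itself reduces to a few lines of NumPy. Below is a minimal sketch written as a `compute_metrics`-style callback; the function name is illustrative, it assumes raw logits are passed through, and the production implementation may handle the causal-LM one-token shift differently:

```python
import numpy as np

def compute_token_accuracy(eval_pred):
    """Mean token-level accuracy over all non-masked positions (labels != -100)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # For a causal LM, the prediction at position i targets the token at position i + 1.
    preds, labels = preds[:, :-1], labels[:, 1:]
    mask = labels != -100                      # ignore padding / prompt tokens
    correct = (preds == labels) & mask
    return {"mean_token_accuracy": float(correct.sum() / mask.sum())}
```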
🚀 Use Case
This model is tuned for instruction-driven chain-of-thought generation, and is especially useful in:
- Educational tools for logical reasoning and coding
- Auto SQL and code generation for tabular or structured systems
- Teaching agents in math, database, and programming domains
- Conversational agents requiring task-specific structured outputs
- Medical diagnosis reasoning
📦 How to Use
```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
)


class StopOnTokens(StoppingCriteria):
    """Stop generation as soon as any of the given stop-token sequences appears at the end of the output."""

    def __init__(self, stop_token_ids: list):
        super().__init__()
        self.stop_token_ids = stop_token_ids

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        return any(
            input_ids[0, -len(token):].tolist() == token
            for token in self.stop_token_ids
        )


model = AutoModelForCausalLM.from_pretrained("eagle0504/qwen-distilled-scout-1.5b-instruct-gen2")
tokenizer = AutoTokenizer.from_pretrained("eagle0504/qwen-distilled-scout-1.5b-instruct-gen2")

# The model closes every answer with a </response> tag, so stop generation there.
stop_sequence = "</response>"
stop_ids = tokenizer.encode(stop_sequence, add_special_tokens=False)
stopping_criteria = StoppingCriteriaList([StopOnTokens([stop_ids])])

# Set the domain in the instruction to one of: math, SQL, python, or medical.
prompt = (
    "<instruction>This is a math problem.</instruction>"
    "<question>Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?</question>"
)

inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=1024,                  # upper bound; the stopping criterion usually ends generation earlier
    stopping_criteria=stopping_criteria,  # halt as soon as </response> is produced
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
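Because the model was trained to wrap its reasoning in `<think>` tags and its answer in `<response>` tags, the decoded text can be split with a simple regex. This is an optional convenience sketch, not part of the model's API; it falls back to the raw text if the tags are absent (for example, if generation hit the token limit first):

```python
import re

decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Extract the reasoning trace and the final answer from the tagged output.
think = re.search(r"<think>(.*?)</think>", decoded, re.DOTALL)
response = re.search(r"<response>(.*?)(?:</response>|$)", decoded, re.DOTALL)

print("Reasoning:", think.group(1).strip() if think else "(no <think> block found)")
print("Answer:", response.group(1).strip() if response else decoded)
```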
📊 Limitations
- The model is specialized for instruction-following tasks in math, SQL, Python, and medical reasoning. It may require further fine-tuning to generalize to open-domain dialogue or creative generation.
- Input length is capped at 1024 tokens, beyond which content will be truncated.
🧑‍💻 Author
- Name: Yiqiao Yin
- Hugging Face: eagle0504
📝 Citation
```bibtex
@misc{yin2025instructgen1,
  title={Instruction-Tuned Qwen 1.5B Fine-tuned on Math + SQL + Python + Medical CoT Tasks},
  author={Yiqiao Yin},
  year={2025},
  howpublished={\url{https://huggingface.co/eagle0504/qwen-distilled-scout-1.5b-instruct-gen2}},
}
```