# Model Card for DeryFerd/Qwen-Math-Code-Distill-Phi-2

## Model Details

### Model Description

This model is a fine-tuned version of `microsoft/phi-2` adapted for both Python code generation and step-by-step mathematical reasoning. The goal of this project was to distill the capabilities of larger "teacher" models (`Qwen2.5-Coder-7B-Instruct` for coding and `Qwen2.5-Math-7B-Instruct` for math) into the compact and efficient Phi-2 architecture.

The model was trained on a combined dataset of Python programming problems (from MBPP and opc-sft-stage2) and math word problems (from GSM8K and MATH). It is designed to generate not just answers but also the thought process behind them, mimicking the style of its teachers.
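For context, distillation data of this kind is typically built by prompting each teacher model with a problem and capturing its full worked response. The sketch below illustrates that pattern for the math teacher; the prompt handling, sample count, and generation settings are assumptions, not the exact pipeline used for this model.

```python
# Hypothetical sketch of teacher-response generation for distillation.
# The prompt handling and generation settings here are assumptions,
# not the exact pipeline used to build this model's training data.
from datasets import load_dataset
from transformers import pipeline

teacher = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-Math-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

# Take a slice of GSM8K questions and let the teacher answer step by step.
gsm8k = load_dataset("gsm8k", "main", split="train[:2000]")

pairs = []
for example in gsm8k:
    messages = [{"role": "user", "content": example["question"]}]
    out = teacher(messages, max_new_tokens=512, do_sample=False)
    pairs.append({
        "instruction": example["question"],
        # The pipeline returns the chat history; the last message is the teacher's answer.
        "response": out[0]["generated_text"][-1]["content"],
    })
```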

- **Developed by:** DeryFerd
- **Model type:** Causal Language Model
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** microsoft/phi-2

## Uses

### Direct Use

This model is intended for direct use in generating Python functions from natural language and solving math word problems with step-by-step explanations. It can be used as a coding/math assistant, for educational purposes, or for rapid prototyping.

Intended Use:

- Generating Python functions from docstrings or natural language instructions.
- Solving math problems while showing the reasoning process.

### Out-of-Scope Use

This is a specialized model. It will not perform well on tasks outside of basic Python code and grade-school level math, such as general conversation, translation, or creative writing. It has not been trained or evaluated for safety and may produce incorrect or insecure code, as well as flawed mathematical reasoning.

## Bias, Risks, and Limitations

This model was trained on the MBPP, opc-sft-stage2, GSM8K, and MATH datasets. Its capabilities are limited to these domains. The model may generate code that is syntactically correct but logically flawed, or math solutions that seem logical but contain calculation errors. Always review and test the generated output before use in production environments.

A notable limitation discovered during development is a potential low-level GPU memory conflict. When this model is loaded into the same runtime as a significantly larger and architecturally different model (like Qwen 7B), its fine-tuned capabilities can be silently overridden, causing it to revert to the base model's behavior. It is recommended to run this model in an isolated process.
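A straightforward way to follow that recommendation is to load and run the model in its own worker process. The sketch below uses Python's standard `multiprocessing` module with the `spawn` start method; it is a generic isolation pattern, not code from this repository.

```python
# Generic isolation pattern (assumption, not part of this repository):
# load and run the model in a separate process so it never shares a
# runtime or GPU context with other models.
import multiprocessing as mp

def generate_in_worker(prompt, result_queue):
    # Import inside the worker so the model is only ever loaded there.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="DeryFerd/Qwen-Math-Code-Distill-Phi-2",
        torch_dtype="auto",
        device_map="auto",
        trust_remote_code=True,
    )
    out = pipe(prompt, max_new_tokens=256, do_sample=False)
    result_queue.put(out[0]["generated_text"])

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    result_queue = ctx.Queue()
    prompt = "Instruct: Write a Python function that reverses a string.\nOutput:"
    worker = ctx.Process(target=generate_in_worker, args=(prompt, result_queue))
    worker.start()
    print(result_queue.get())
    worker.join()
```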

## How to Get Started with the Model

Use the code below to get started with the model using the `transformers` library.

```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer

model_id = "DeryFerd/Qwen-Math-Code-Distill-Phi-2"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)

# Create a text-generation pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# --- Example 1: Coding ---
code_instruction = "Write a Python function that takes a list of strings and returns a new list with all strings converted to uppercase."
prompt = f"Instruct: {code_instruction.strip()}\nOutput:"

outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id
)
# Keep everything after the first "Output:" marker
response = outputs[0]['generated_text'].split("Output:", 1)[1].strip()
print("--- Coding Example ---")
print(response)

# --- Example 2: Math ---
math_instruction = "A bakery has 150 cookies. They sell 60 in the morning and 35 in the afternoon. How many cookies are left at the end of the day?"
prompt = f"Instruct: {math_instruction.strip()}\nOutput:"

outputs = pipe(
    prompt,
    max_new_tokens=512,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id
)
response = outputs[0]['generated_text'].split("Output:", 1)[1].strip()
print("\n--- Math Example ---")
print(response)
```

## Training Details

### Training Data

The model was fine-tuned on a combined dataset of **3,474 instruction-response pairs**:
- **2,500 math problems:** a mix of 2,000 samples from the GSM8K dataset and 500 samples from the MATH dataset, with responses generated by the teacher model `Qwen2.5-Math-7B-Instruct`.
- **974 coding problems:** a curated subset of the MBPP dataset, with responses generated by the teacher model `Qwen2.5-Coder-7B-Instruct`.
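The inference examples above wrap each instruction in an `Instruct: ...\nOutput:` template, so a plausible way to serialize the training pairs is sketched below; the exact formatting function used during training is an assumption.

```python
# Hypothetical formatting step: serialize each instruction-response pair into
# the "Instruct: ... / Output: ..." template shown in the inference examples.
# The exact template used during training is an assumption.
def format_example(example: dict) -> dict:
    text = (
        f"Instruct: {example['instruction'].strip()}\n"
        f"Output: {example['response'].strip()}"
    )
    return {"text": text}

sample = {
    "instruction": "Write a Python function that returns the square of a number.",
    "response": "def square(n):\n    return n * n",
}
print(format_example(sample)["text"])
```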

### Training Procedure

The model was fine-tuned using the LoRA (Low-Rank Adaptation) method for parameter-efficient fine-tuning (PEFT).

#### Training Hyperparameters

- **Framework:** `trl.SFTTrainer`
- **LoRA `r`:** 16
- **LoRA `alpha`:** 32
- **Target Modules:** `q_proj`, `k_proj`, `v_proj`, `dense`
- **Learning Rate:** 2e-4
- **LR Scheduler:** Constant
- **Epochs:** 3
- **Batch Size:** 1 (with gradient accumulation of 8)
- **Optimizer:** Paged AdamW 8-bit
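Put together with `peft` and `trl`, a configuration matching these hyperparameters would look roughly like the sketch below. The output directory, the tiny inline dataset, and the use of `SFTConfig` are illustrative assumptions, not the exact training script.

```python
# Rough reconstruction of the fine-tuning setup from the hyperparameters above.
# The output directory and the tiny inline dataset are placeholders/assumptions.
from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

train_dataset = Dataset.from_list([
    {"text": "Instruct: Write a Python function that adds two numbers.\n"
             "Output: def add(a, b):\n    return a + b"},
])

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    output_dir="phi2-math-code-distill",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    num_train_epochs=3,
    optim="paged_adamw_8bit",
    dataset_text_field="text",
)

trainer = SFTTrainer(
    model="microsoft/phi-2",   # base model; SFTTrainer loads it by name
    args=training_args,
    train_dataset=train_dataset,
    peft_config=lora_config,
)
trainer.train()
```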

### Compute Infrastructure

- **Hardware Type:** Single NVIDIA T4 GPU
- **Cloud Provider:** Kaggle Notebooks

## Citation

If you use this model, please consider citing the original Phi-2, MBPP, GSM8K, and MATH papers.