---
library_name: transformers
tags:
- phi-2
- code-generation
- math
- reasoning
- gsm8k
- distilled
- code
datasets:
- google-research-datasets/mbpp
- gsm8k
- OpenCoder-LLM/opc-sft-stage2
- meta-math/MetaMathQA
language:
- en
base_model:
- microsoft/phi-2
pipeline_tag: text-generation
---
# Model Card for DeryFerd/Qwen-Math-Code-Distill-Phi-2
## Model Details
### Model Description
This model is a fine-tuned version of **`microsoft/phi-2`**, adapted for both **Python code generation** and **step-by-step mathematical reasoning**. The goal of this project was to distill the capabilities of larger "teacher" models (`Qwen2.5-Coder-7B-Instruct` for coding and `Qwen2.5-Math-7B-Instruct` for math) into the compact and efficient Phi-2 architecture.
The model was trained on a combined dataset of Python programming problems (from MBPP and opc-sft-stage2) and math word problems (from GSM8K and MATH). It is designed to generate not just final answers but also the reasoning behind them, mimicking the style of its teachers.
- **Developed by:** DeryFerd
- **Model type:** Causal Language Model
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** `microsoft/phi-2`
### Model Sources
- **Repository:** [https://huggingface.co/DeryFerd/Qwen-Math-Code-Distill-Phi-2](https://huggingface.co/DeryFerd/Qwen-Math-Code-Distill-Phi-2)
## Uses
### Direct Use
This model is intended for direct use in generating Python functions from natural language and solving math word problems with step-by-step explanations. It can be used as a coding/math assistant, for educational purposes, or for rapid prototyping.
**Intended Use:**
* Generating Python functions from docstrings or natural language instructions.
* Solving math problems while showing the reasoning process.
### Out-of-Scope Use
This is a specialized model. It will not perform well on tasks outside of basic Python code generation and school-level math, such as general conversation, translation, or creative writing. It has not been trained or evaluated for safety and may produce incorrect or insecure code, as well as flawed mathematical reasoning.
## Bias, Risks, and Limitations
This model was trained on the MBPP, opc-sft-stage2, GSM8K, and MATH datasets. Its capabilities are limited to these domains. The model may generate code that is syntactically correct but logically flawed, or math solutions that seem logical but contain calculation errors. **Always review and test the generated output before use in production environments.**
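As a minimal illustration of that review step, a generated function can be exercised with a few assertions before it is trusted anywhere. The function below is a hypothetical model output, not a guaranteed one:

```python
# Hypothetical model output for an uppercase-conversion task; always verify
# generated code with quick tests like these before relying on it.
def to_uppercase(strings):
    return [s.upper() for s in strings]

# A few assertions covering typical and edge cases.
assert to_uppercase(["a", "Bc"]) == ["A", "BC"]
assert to_uppercase([]) == []
print("basic checks passed")
```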
A notable limitation discovered during development is a potential **low-level GPU memory conflict**. When this model is loaded into the same runtime as a significantly larger and architecturally different model (like Qwen 7B), its fine-tuned capabilities can be silently overridden, causing it to revert to the base model's behavior. It is recommended to run this model in an isolated process.
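One way to follow that recommendation is to run generation in a separately spawned process so the model gets its own CUDA context. The sketch below is one possible pattern, not the author's setup:

```python
# Minimal sketch: run generation in an isolated process so this model does not
# share GPU state with other loaded models. Illustrative pattern only.
import multiprocessing as mp

def generate_in_subprocess(prompt: str, queue: mp.Queue) -> None:
    # Heavy imports happen inside the child so the parent stays model-free.
    from transformers import pipeline
    pipe = pipeline(
        "text-generation",
        model="DeryFerd/Qwen-Math-Code-Distill-Phi-2",
        torch_dtype="auto",
        device_map="auto",
        trust_remote_code=True,
    )
    out = pipe(prompt, max_new_tokens=256, do_sample=False)
    queue.put(out[0]["generated_text"])

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # fresh interpreter, fresh CUDA state
    q = ctx.Queue()
    prompt = "Instruct: Write a Python function that reverses a string.\nOutput:"
    p = ctx.Process(target=generate_in_subprocess, args=(prompt, q))
    p.start()
    print(q.get())
    p.join()
```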
## How to Get Started with the Model
Use the code below to get started with the model using the `transformers` library.
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer

model_id = "DeryFerd/Qwen-Math-Code-Distill-Phi-2"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)

# Create a text-generation pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# --- Example 1: Coding ---
# The model keeps phi-2's "Instruct: ... Output:" prompt format.
code_instruction = "Write a Python function that takes a list of strings and returns a new list with all strings converted to uppercase."
prompt = f"Instruct: {code_instruction.strip()}\nOutput:"
outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id
)
# The pipeline returns the prompt plus the completion; keep only the completion.
response = outputs[0]['generated_text'].split("Output:")[1].strip()
print("--- Coding Example ---")
print(response)

# --- Example 2: Math ---
math_instruction = "A bakery has 150 cookies. They sell 60 in the morning and 35 in the afternoon. How many cookies are left at the end of the day?"
prompt = f"Instruct: {math_instruction.strip()}\nOutput:"
outputs = pipe(
    prompt,
    max_new_tokens=512,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id
)
response = outputs[0]['generated_text'].split("Output:")[1].strip()
print("\n--- Math Example ---")
print(response)
```
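Greedy decoding (`do_sample=False`) is used above so runs are reproducible; enable sampling (e.g. `do_sample=True` with a `temperature`) if more varied completions are preferred.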
## Training Details
### Training Data
The model was fine-tuned on a combined dataset of **3,474 instruction-response pairs** (a sketch of the teacher-response collection step follows the list):
- **2,500 math problems:** a mix of 2,000 samples from the GSM8K dataset and 500 samples from the MATH dataset, with responses generated by `Qwen2.5-Math-7B-Instruct`.
- **974 coding problems:** a curated subset of the MBPP dataset, with responses generated by `Qwen2.5-Coder-7B-Instruct`.
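The card does not include the exact generation script; the following is a minimal sketch of what the collection step could look like, assuming a recent `transformers` version whose `text-generation` pipeline accepts chat messages. The question shown is illustrative, not taken from the actual data.

```python
# Illustrative sketch (not the author's script): ask the math teacher model for
# a step-by-step solution, then store it as an Instruct/Output training pair.
from transformers import pipeline

teacher = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-Math-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

question = "A train travels 60 km in the first hour and 45 km in the second. How far does it travel in total?"
messages = [{"role": "user", "content": question}]

# Recent pipelines apply the model's chat template to message lists.
result = teacher(messages, max_new_tokens=512, do_sample=False)
solution = result[0]["generated_text"][-1]["content"]

# Format as a phi-2 style training example.
training_example = f"Instruct: {question}\nOutput: {solution}"
```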
### Training Procedure
The model was fine-tuned with LoRA (Low-Rank Adaptation), a parameter-efficient fine-tuning (PEFT) method; a configuration sketch follows the hyperparameter list below.
#### Training Hyperparameters
- **Framework:** `trl.SFTTrainer`
- **LoRA `r`:** 16
- **LoRA `alpha`:** 32
- **Target Modules:** `q_proj`, `k_proj`, `v_proj`, `dense`
- **Learning Rate:** 2e-4
- **LR Scheduler:** Constant
- **Epochs:** 3
- **Batch Size:** 1 (with gradient accumulation of 8)
- **Optimizer:** Paged AdamW 8-bit
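The exact training script is not part of this card. As a rough reconstruction from the hyperparameters above, the `peft`/`transformers` configuration might look like the sketch below; the `lora_dropout` value and `output_dir` are assumptions (the card does not state them), and `SFTTrainer` arguments vary across `trl` versions.

```python
# Rough reconstruction of the training configuration from the listed
# hyperparameters; not the author's actual script.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                     # LoRA rank
    lora_alpha=32,            # LoRA scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    lora_dropout=0.05,        # assumption: dropout not stated in the card
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="phi2-distill",        # hypothetical path
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    num_train_epochs=3,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",         # paged AdamW 8-bit via bitsandbytes
)
# These objects would then be passed to trl.SFTTrainer along with the dataset.
```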
### Compute Infrastructure
- **Hardware Type:** Single NVIDIA T4 GPU
- **Cloud Provider:** Kaggle Notebooks
## Citation
If you use this model, please consider citing the original Phi-2, MBPP, GSM8K, and MATH papers. |