model_name: "Mistral-7B-Math (Merged FP16 Checkpoint)"
repo: "samzheng/mistral-7b-math-merged"
base_model: name: "unsloth/mistral-7b-instruct-v0.3-bnb-4bit" url: "https://huggingface.co/unsloth/mistral-7b-instruct-v0.3-bnb-4bit"\ task: "Grade-school symbolic math word problems → Python code answers"
fine_tuning: method: "LoRA adapters (r=16, α=16, dropout=0) merged into the base weights, FP16 precision" parameters: r: 16 alpha: 16 dropout: 0
dataset: description: "6.7k Alpaca-formatted Q/A pairs with chain-of-thought + code" splits: - "symboliccode_cot_train" - "symboliccode_cot_validation"
language: python
code:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged FP16 checkpoint; device_map="auto" places it on the
# available GPU(s) and falls back to CPU otherwise.
model = AutoModelForCausalLM.from_pretrained(
    "samzheng/mistral-7b-math-merged",
    torch_dtype="auto",
    device_map="auto",
)
tok = AutoTokenizer.from_pretrained("samzheng/mistral-7b-math-merged")

prompt = """Below is an instruction that describes a task...
### Instruction: Solve the problem using step-by-step reasoning and provide Python code.

### Input: Solve for x: 2x + 5 = 17

### Response:
"""
# Tokenize the Alpaca-style prompt, generate, and decode the completion.
inputs = tok(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(output[0], skip_special_tokens=True))
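The completion contains the chain-of-thought plus the Python that computes the answer. A minimal post-processing sketch follows; the helper name, the split on "### Response:", and the bare exec are illustrative assumptions rather than part of the released card, and any real deployment should sandbox the execution.

import re

def run_generated_answer(completion: str) -> dict:
    # Keep only the text generated after the final "### Response:" marker.
    answer = completion.split("### Response:")[-1]
    # Prefer a fenced ```python block if the model emitted one;
    # otherwise treat the whole response as code (an assumption).
    match = re.search(r"```(?:python)?\s*(.*?)```", answer, re.DOTALL)
    code = match.group(1) if match else answer
    namespace: dict = {}
    exec(code, namespace)  # illustrative only; sandbox untrusted code
    return {k: v for k, v in namespace.items() if not k.startswith("__")}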
Safetensors checkpoint: 3.87B params; tensor types F32, F16, U8.
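The checkpoint ships only the merged weights. The sketch below shows one common way such a merge is produced with PEFT, assuming a full-precision Mistral base; the base repo, adapter path, and output directory are placeholders, since the actual merge script is not part of this card.

# Hypothetical reconstruction of the merge step (not the published training code).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "mistralai/Mistral-7B-Instruct-v0.3"  # assumed full-precision counterpart
ADAPTER = "path/to/lora-adapter"             # placeholder
OUT = "mistral-7b-math-merged"

# Load the base in FP16, attach the LoRA adapter, fold it into the weights,
# and save a standalone FP16 checkpoint plus tokenizer.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, ADAPTER).merge_and_unload()
merged.save_pretrained(OUT, safe_serialization=True)
AutoTokenizer.from_pretrained(BASE).save_pretrained(OUT)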