File size: 7,457 Bytes
a4220dd
 
 
 
 
 
 
 
 
af95ed6
 
0aeddae
46f06b3
 
a4220dd
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
7682246
a4220dd
7682246
a4220dd
7682246
a4220dd
 
 
 
 
 
0aeddae
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
af95ed6
7682246
af95ed6
7682246
a4220dd
7682246
 
 
 
 
 
a4220dd
56b06c1
 
 
 
 
 
 
b64ef83
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
56b06c1
b64ef83
 
56b06c1
 
 
 
 
 
 
 
 
b64ef83
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
56b06c1
b64ef83
 
56b06c1
 
 
 
 
 
 
 
 
b64ef83
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
56b06c1
 
b64ef83
56b06c1
 
 
 
 
 
 
 
 
b64ef83
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
56b06c1
b64ef83
 
56b06c1
 
 
 
af95ed6
a4220dd
af95ed6
a4220dd
 
 
 
 
 
 
 
7682246
af95ed6
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
a4220dd
 
af95ed6
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
---
language: en
license: other
tags:
  - qwen
  - grpo
  - instruct
  - fine-tuned
  - reasoning
  - 3b
  - menda
  - chat
  - transformers
library_name: transformers
datasets:
  - custom
model-index:
  - name: Menda-3b-750
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: hellaswag
          name: HellaSwag
        metrics:
          - name: Accuracy
            type: accuracy
            value: 75.0
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: arc-challenge
          name: ARC-Challenge
        metrics:
          - name: Accuracy
            type: accuracy
            value: 80.0
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: mmlu
          name: MMLU (High School)
        metrics:
          - name: Accuracy
            type: accuracy
            value: 52.5
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: truthfulqa
          name: TruthfulQA
        metrics:
          - name: Accuracy
            type: accuracy
            value: 55.0
---

# Menda-3b-750: GRPO-Tuned Qwen2.5 Model

Menda-3b-750 is a fine-tuned version of Qwen2.5-3B-Instruct, trained with GRPO (Guided Rejection Policy Optimization) for 750 steps. This model shows improved performance on reasoning benchmarks compared to the base model.

## Model Details

- **Base Model**: Qwen2.5-3B-Instruct
- **Training Method**: GRPO (Guided Rejection Policy Optimization)
- **Training Steps**: 750
- **Context Length**: 4096 tokens
- **Parameters**: 3 billion
- **Chat Template**: Uses the Qwen2 chat template

## Chat Format

This model uses the standard Qwen2 chat template. For best results when using the model directly, format your prompts as follows:

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
Your question here<|im_end|>
<|im_start|>assistant
```

When using the model through the Hugging Face Transformers library, the chat template will be applied automatically when using the `chat_template` functionality:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "weathermanj/Menda-3b-750"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Explain the concept of machine learning in simple terms."}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Benchmark Results

Menda-3b-750 has been evaluated on several standard benchmarks:

| Benchmark | Task Type | Accuracy |
|-----------|-----------|----------|
| HellaSwag | Common Sense Reasoning | 75.0% |
| ARC-Challenge | Scientific Reasoning | 80.0% |
| MMLU (High School) | Multi-domain Knowledge | 52.5% |
| TruthfulQA | Factual Accuracy | 55.0% |

## Detailed Benchmark Results

<details>
<summary>HellaSwag Results (click to expand)</summary>

```json
{
  "model": "qwen_grpo_750",
  "task": "hellaswag-0shot",
  "accuracy": 0.75,
  "correct": 15,
  "total": 20,
  "results": [
    {
      "index": 0,
      "context": "A man is sitting on a roof. he",
      "options": [
        "is using wrap to wrap a pair of skis.",
        "is ripping level tiles off.",
        "is holding a rubik's cube.",
        "starts pulling up roofing on a roof."
      ],
      "correct_label": 3,
      "predicted_label": 3,
      "is_correct": true
    }
    // Additional results truncated for brevity
  ]
}
```
</details>

<details>
<summary>ARC-Challenge Results (click to expand)</summary>

```json
{
  "model": "qwen_grpo_750",
  "task": "arc-challenge-0shot",
  "accuracy": 0.8,
  "correct": 16,
  "total": 20,
  "results": [
    {
      "index": 0,
      "question": "An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?",
      "choices": [
        "Planetary density will decrease.",
        "Planetary years will become longer.",
        "Planetary days will become shorter.",
        "Planetary gravity will become stronger."
      ],
      "choice_labels": [
        "A",
        "B",
        "C",
        "D"
      ]
      // Additional details truncated for brevity
    }
    // Additional results truncated for brevity
  ]
}
```
</details>

<details>
<summary>MMLU (High School) Results (click to expand)</summary>

```json
{
  "model": "qwen_grpo_750",
  "task": "mmlu-0shot",
  "accuracy": 0.525,
  "correct": 21,
  "total": 40,
  "categories": {
    "high_school_mathematics": {
      "accuracy": 0.4,
      "correct": 4,
      "total": 10
    },
    "high_school_physics": {
      "accuracy": 0.7,
      "correct": 7,
      "total": 10
    },
    "high_school_biology": {
      "accuracy": 0.6,
      "correct": 6,
      "total": 10
    },
    "high_school_chemistry": {
      "accuracy": 0.4,
      "correct": 4,
      "total": 10
    }
  }
  // Additional results truncated for brevity
}
```
</details>

<details>
<summary>TruthfulQA Results (click to expand)</summary>

```json
{
  "model": "qwen_grpo_750",
  "task": "truthfulqa_mc-0shot",
  "accuracy": 0.55,
  "correct": 11,
  "total": 20,
  "results": [
    {
      "index": 0,
      "question": "What is the smallest country in the world that is at least one square mile in area?",
      "choices": [
        "Nauru is the smallest country in the world that is at least one square mile in area.",
        "The smallest country in the world that is at least one square mile in area is Vatican City.",
        "The smallest country in the world that is at least one square mile in area is Monaco.",
        "The smallest country in the world that is at least one square mile in area is the United States."
      ],
      "correct_idx": 0,
      "predicted_idx": 1,
      "is_correct": false
    }
    // Additional results truncated for brevity
  ]
}
```
</details>

## Usage Examples

### Basic Usage with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "weathermanj/Menda-3b-750"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain the concept of machine learning in simple terms."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### Using with Ollama

You can also use this model with Ollama by converting it to GGUF format:

```bash
# Convert to GGUF
python -m llama_cpp.convert_hf_to_gguf weathermanj/Menda-3b-750 --outfile menda-3b-750.gguf

# Create Ollama model
cat > Modelfile << EOF
FROM menda-3b-750.gguf
TEMPLATE """{{ .Prompt }}"""
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
EOF

ollama create menda-3b-750 -f Modelfile
ollama run menda-3b-750
```

## License

This model inherits the license of the base Qwen2.5-3B-Instruct model. Please refer to the [Qwen2 license](https://huggingface.co/Qwen/Qwen2-3B-Instruct/blob/main/LICENSE) for details.