---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B
- open-r1/OlympicCoder-7B
pipeline_tag: text-generation
tags:
- merge
- programming
- code generation
- code
- qwen2
- codeqwen
- chat
- qwen
- qwen-coder
- moe
- coding
- coder
library_name: transformers
---

<h2>Qwen2.5-Wolverine-CODER-11B-V2-128k-ctx</h2>

<img src="wolverine-coder.jpg" style="float:right; width:300px; height:500px; padding:10px;">

This repo contains the full precision source, in "safetensors" format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other formats. The source can also be used directly.
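If you want to run the source directly, a minimal loading sketch with the `transformers` library follows. The repo id is an assumption based on this card's title; adjust it if it differs.

```python
# Minimal sketch: load the full-precision safetensors source with transformers.
# The repo id is assumed from the card title; dtype/device choices are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Qwen2.5-Wolverine-CODER-11B-V2-128k-ctx"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # needs `accelerate`; spreads layers across devices
)
```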

"Ripping your programming worries to shreds... fast."

Tipping the scales at 42 layers and 507 tensors... the monster lives.

Two monsters in fact.

This repo has the source code for Version 2.

Each model generates stronger, more compact code with an enhanced understanding of your instructions, and follows what you tell it to the letter.

These overpowered - yet wickedly fast - CODING ENGINES are based on two of the best coder AIs:

"Qwen2.5-Coder-7B-Instruct" 

[ https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct ]

and

"OlympicCoder-7B"

[ https://huggingface.co/open-r1/OlympicCoder-7B ]

These two models are stuffed into one compact powerhouse 11B merge that is stronger in performance and understanding than both donor models.

---

<B>QUANTS:</B>

---

Special Thanks to Team Mradermacher for the quants:

GGUF:

https://huggingface.co/mradermacher/Qwen2.5-Wolverine-CODER-11B-V2-128k-ctx-GGUF

GGUF-IMATRIX:

https://huggingface.co/mradermacher/Qwen2.5-Wolverine-CODER-11B-V2-128k-ctx-i1-GGUF

---

The following quants are available at the moment, one of each per version:

- Q4_K_S
- IQ3_S - NEO IMATRIX (in-house dataset developed by DavidAU)
- Q8 MAX

These are unaltered quants for primary testing, except Q8 MAX, which has the output tensor at bfloat16 (full precision). The output tensor accounts for 10-20% of the "decision making" in a model.

Smaller quants are suggested for simpler projects / smaller code; use Q8 MAX for larger, more complex projects with lots of instructions.

LIMITED GGUF REPO HERE:

https://huggingface.co/DavidAU/Qwen2.5-Wolverine-CODER-11B-gguf

(additional quants/GGUFs are listed under "Quantizations" at the upper right of the model page)
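For a quick start with the GGUF quants, here is a minimal sketch using `llama-cpp-python`. The filename glob is an assumption; check the GGUF repo's file list for the exact quant name.

```python
# Minimal sketch: run a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-Wolverine-CODER-11B-V2-128k-ctx-GGUF",
    filename="*Q4_K_S*",   # glob; the exact quant filename is an assumption
    n_ctx=32768,           # raise toward 131072 if memory allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])
```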

CONFIGS:
- #1 -> OlympicCoder-7B as primary/start, with Qwen2.5-Coder-7B-Instruct as "finalizer".
- #2 -> Qwen2.5-Coder-7B-Instruct as primary/start, with OlympicCoder-7B as "finalizer".

NOTES:
- The two configs/versions behave very differently from each other.
- The model has been tested down to IQ2_S (NEO Imatrix) and is fully operational.
- Tool calling is supported in both versions; see the sketch after this list.
- Full source repos and complete quant sets are to follow.
- Final model size (including layers/tensors) and config are subject to change.
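As a rough illustration of tool calling, here is a sketch using `transformers` and the embedded chat template (reusing the tokenizer from the loading sketch above). It assumes the merge keeps Qwen2.5's tool-aware ChatML template; the weather function is a hypothetical stub.

```python
# Sketch: tool calling via the embedded chat template (assumed Qwen2.5-style).
def get_current_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22C"  # hypothetical stub for illustration

messages = [{"role": "user", "content": "What's the weather in Berlin?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_weather],  # schema is derived from signature + docstring
    add_generation_prompt=True,
    tokenize=False,
)
# Generate from `prompt`; if the model emits a <tool_call> JSON block, run the
# tool, append the result as a {"role": "tool", ...} message, and generate again.
```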

---

Config / Settings

---

The model is set to 128k (131,072 tokens) of context.

Requirements [Qwen 2.5 7B Coder default settings] (a generation sketch follows this list):
- Temp: .5 to .7 (or lower)
- top_k: 20, top_p: .8, min_p: .05
- Repetition penalty: 1.1 (can be lower)
- Jinja template (embedded) or ChatML template.
- A system prompt is not required (tests were run with a blank system prompt).
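Here is a minimal sketch applying these settings with `transformers`, reusing the model and tokenizer from the loading sketch above; the prompt is illustrative, and `min_p` needs a recent transformers release.

```python
# Sketch: the recommended sampler settings applied via generate().
messages = [{"role": "user", "content": "Write a C function that parses a CSV line."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,        # .5 to .7 per the list above
    top_k=20,
    top_p=0.8,
    min_p=0.05,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```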

Refer to the "Qwen2.5-Coder-7B-Instruct" and/or "OlympicCoder-7B" repos (above) for additional settings, benchmarks and usage.

---

<H2>Help, Adjustments, Samplers, Parameters and More</H2>

---

<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>

In "KoboldCpp" or  "oobabooga/text-generation-webui" or "Silly Tavern" ;

Set the "Smoothing_factor" to 1.5 

: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"

: in text-generation-webui -> parameters -> lower right.

: In Silly Tavern this is called: "Smoothing"
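The `transformers` library has no built-in "smoothing_factor" knob; if you want the same effect there, the sketch below implements the quadratic transform as a custom logits processor. The exact formula is an assumption modeled on common UI implementations of "Quadratic Sampling".

```python
# Sketch only: quadratic "smoothing" as a custom logits processor. The transform
# is an assumption based on common KoboldCpp / text-generation-webui behavior,
# not something this model requires.
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class QuadraticSmoothing(LogitsProcessor):
    def __init__(self, smoothing_factor: float = 1.5):
        self.k = smoothing_factor

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        max_logit = scores.max(dim=-1, keepdim=True).values
        # Logits near the top are barely moved; the tail is pushed down
        # quadratically, flattening the head of the distribution.
        return max_logit - self.k * (scores - max_logit) ** 2

# Usage: model.generate(..., logits_processor=LogitsProcessorList([QuadraticSmoothing(1.5)]))
```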


NOTE: For "text-generation-webui": if using GGUFs, you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model).

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

- Increase repetition penalty to 1.1 to 1.15 (not needed if you use "smoothing_factor").

- If the interface/program you use to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted above.

<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>

This is a "Class 1" model:

For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues), please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

That document covers all parameters used for generation, plus advanced parameters and samplers, and methods to improve model performance for all use cases, including chat and roleplay.