---
license: apache-2.0
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- qwen3
- horror
- finetune
- merge
base_model:
- prithivMLmods/Cetus-Qwen3_4B-GeneralThought
- sam-paech/Qwen3-4B-antislop-exp15
- Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v2
- Qwen/Qwen3-4B
pipeline_tag: text-generation
---

<h2>Qwen3 - 4B - Fiction on Fire - Series 7, Model 1006</h2>

<img src="fiction-on-fire.jpg" style="float:right; width:300px; height:300px; padding:10px;">

A high-precision Qwen3 4B merge, modified with random pruning specifically to alter prose and creativity, make the model perform better, and add some "knowledge" to it.

Random pruning (density) makes each model in the series unique.

A reference example prompt/generation is included below; it is used across all repos of this model/series and between series (there are two: Series 6 and 7).
Each series consists of 8 models, 1000 to 1007, plus a ninth "X" model in which the merge was done without density/pruning.

Access the series/models via the "Fiction on Fire" collection on the lower right.

You can use any of the models, use them for merging, merge them with themselves, and so on.

This model uses "Josiefied-Qwen3-4B-abliterated-v2" as a base to reduce/prevent refusals - i.e., to decensor it.

That being said, because of how pruning/merging works in this series, removal of "refusals" may not be 100% and will vary between models in the series.

Each model in the series will operate/perform differently; sometimes the differences are minor, other times major.

The reference example generation shows some of these differences, including instruction following, reasoning, bias, prose,
and other structural changes.

Reasoning is fully intact and functioning in all models in both series.

All models in the series were tested (quanted; prompts and generations) prior to uploading.

Note that Series 1, 2, 3, 4 and 5 did not "meet the grade" and were therefore not uploaded.

Special thanks to the model makers ("model tree") and to Mergekit.

Requires:
- ChatML or Jinja template (embedded)
- Temp range 0 to 5.
- Rep pen range 1 to 1.1
- System prompt (optional) below.
- Context is 40k / 40,000.

Suggested Settings:
- temp .4 to 2.5
- temp .2 to .8 for specific reasoning / non-creative tasks.
- rep pen 1.05
- top_k 100, top_p .95, min_p .05
- context of 8k at least.
- Other samplers/parameters as required.
- See other Qwen recommended settings at the repo below.

Maximum context length can be altered using "YaRN"; see the Qwen repo for instructions. Note that changing
this will alter model performance. For creative use cases, it will elongate output generation (including prose changes) and,
in some cases, reasoning too.
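
As a sketch only (values taken from the Qwen3 documentation, not verified against this merge), YaRN scaling is typically enabled by adding a "rope_scaling" block to the model's config.json; a factor of 4.0 over the native 32,768-token window extends context to roughly 131k:

```
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

Check the Qwen repo for the exact keys your transformers / llama.cpp version expects, as the field names have changed between releases.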

System Prompt (you may or may not need this):

```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
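
For reference, the ChatML layout this template family uses wraps each turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of how the system prompt above slots into a raw prompt (the `build_prompt` helper is hypothetical, for illustration only; most front-ends apply this template for you):

```python
# Sketch of the ChatML turn format; build_prompt is a hypothetical
# helper, not part of any library or of this model's files.
def build_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # the model generates from here
    )

prompt = build_prompt(
    "You are a deep thinking AI...",
    "Start a 1000 word scene...",
)
```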

To turn "thinking" on/off and view other features, see the main Qwen3 repo here:

https://huggingface.co/Qwen/Qwen3-4B

Merge Formula:

```
models:
  - model: prithivMLmods/Cetus-Qwen3_4B-GeneralThought
    parameters:
      weight: [1, 1, .75, .5, .25, .25, .05, .01]
      density: .8
  - model: sam-paech/Qwen3-4B-antislop-exp15
    parameters:
      weight: [0, 0, .25, .35, .4, .25, .30, .04]
      density: .6
  - model: Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v2
    parameters:
      weight: [0, 0, 0, .15, .35, .5, .65, .95]
      density: .8
merge_method: dare_ties
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v2
dtype: bfloat16
```
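
The bracketed weight lists above are mergekit layer gradients: the anchor values are spread evenly across the model's layer blocks and interpolated linearly in between, so Cetus dominates early layers while Josiefied dominates late ones. A rough sketch of that read-out (my own illustrative helper, not mergekit's internal code; the 36-layer count for Qwen3-4B is an assumption):

```python
# Illustrative piecewise-linear read-out of a mergekit weight gradient.
# Not mergekit's actual implementation; a sketch of the documented behaviour.
def gradient_value(anchors, t):
    """Interpolate a gradient list at position t in [0, 1]."""
    if len(anchors) == 1:
        return anchors[0]
    scaled = t * (len(anchors) - 1)
    i = min(int(scaled), len(anchors) - 2)   # segment index
    frac = scaled - i                        # position within segment
    return anchors[i] * (1 - frac) + anchors[i + 1] * frac

cetus = [1, 1, .75, .5, .25, .25, .05, .01]
# Approximate per-layer weight, assuming 36 layers (indices 0..35):
per_layer = [round(gradient_value(cetus, i / 35), 3) for i in range(36)]
```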

NOTES:
- To reproduce this model you need "mergekit", with the "random seed" set to the model number (e.g. if the model is 1000, set the seed to 1000).
- To produce different variations, change the "random seed" value.
- To change the pruning level, change the density (higher = less pruning).
- You can interchange the model positions, including "base".
- This formula is highly variable.

Get Mergekit here:

https://github.com/arcee-ai/mergekit

---

<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>

In "KoboldCpp", "oobabooga/text-generation-webui" or "Silly Tavern":

Set the "Smoothing_factor" to 1.5 to 2.5

: in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"

: in text-generation-webui -> parameters -> lower right.

: in Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui"

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").

- If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.

<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>

This is a "Class 1" model.

For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide that addresses most model issues and covers methods to improve performance for all use cases (including chat and roleplay), please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]


<B>Optional Enhancement:</B>

The following can be used in place of the "system prompt" or "system role" to further enhance the model.

It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as using the "system prompt" or "system role".

Copy and paste EXACTLY as noted; DO NOT line wrap or break the lines, and maintain the carriage returns exactly as presented.

<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>

You do not need to use this; it is presented only as an additional enhancement, which seems to help scene generation
and scene continue functions.

---

REFERENCE EXAMPLE:

- temp 0 (to show raw/core/mean changes)
- rep pen 1.05, top_k 100, top_p .95, min_p .05
- quant Q4_K_M
- Tested in LMStudio.
- Use this to compare differences with ALL models in both series (6 and 7).
- Some formatting may be lost due to copy/paste.
- The prompt used has NO prose or other instructions; prompts with instructions will produce better output.
- Likewise, once "temp" is activated (not zero), generation/reasoning will also improve.
- System prompt used as noted above.

---

<B>
Start a 1000 word scene (vivid, graphic horror in first person), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode...
</B>

<P></P>

[[[thinking start]]]


[[[thinking end]]]

<P></P>

OUTPUT:

