---
license: agpl-3.0
language:
- en
pipeline_tag: text-generation
base_model: google/gemma-2-9b
datasets:
- NewEden/Claude-Instruct-5K
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- lodrick-the-lafted/kalo-opus-instruct-3k-filtered
- anthracite-org/nopm_claude_writing_fixed
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo_opus_misc_240827
- anthracite-org/kalo_misc_part2
tags:
- chat
---

An earlier checkpoint of [Magnum 9B V4], using the same configuration as [Tor-8B]() but trained on Gemma 2 rather than Nemo-8B. This is a finetune made for creative writing and roleplay tasks, trained on top of the base Gemma 2 9B model. I trained for 4 epochs; the 4-epoch checkpoint became Magnum 9B V4, and the 2-epoch checkpoint became my own personal release. The model aims for good prose and writing while not being as `suggestive` as Magnum models usually are, and it keeps some of the intelligence that was nice to have in the Gemma 2 family.

# Quants

GGUF:

EXL2:

## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input would look like this:

```py
"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
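
If you are not using a frontend that builds this format for you, the snippet below is a minimal sketch of assembling the ChatML prompt with the `transformers` chat-template API and generating a reply. The repository id is a placeholder, and it assumes the uploaded tokenizer ships the ChatML chat template set in the training config.

```py
# Minimal sketch, not an official usage example. Assumes the tokenizer ships
# the ChatML chat template from the training config; the repo id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-gemma2-9b-finetune"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
]

# apply_chat_template renders the <|im_start|>/<|im_end|> structure shown above
# and appends the assistant header so the model continues as the assistant.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```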

## System Prompting

I would highly recommend using Sao10k's Euryale system prompt, but the "Roleplay Simple" system prompt provided within SillyTavern will work as well.

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.

<Guidelines>
• Write up to 200 words.
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Writing more than 200 words.
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.

```
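
The `{{char}}` and `{{user}}` placeholders are SillyTavern macros that the frontend fills in automatically. If you are calling the model directly, a small sketch like the one below can do the substitution before the text is sent as the system message; the character names here are made up for illustration.

```py
# Minimal sketch: filling in the SillyTavern-style {{char}}/{{user}} macros by hand.
# The truncated template string and the names are illustrative, not part of the card.
SYSTEM_TEMPLATE = (
    "Currently, your role is {{char}}, described in detail below. "
    "As {{char}}, continue the narrative exchange with {{user}}.\n"
    "..."  # paste the rest of the system prompt from above here
)

def render_system_prompt(template: str, char: str, user: str) -> str:
    """Replace the macros the same way SillyTavern would before sending the prompt."""
    return template.replace("{{char}}", char).replace("{{user}}", user)

system_prompt = render_system_prompt(SYSTEM_TEMPLATE, char="Seraphina", user="Anon")
print(system_prompt)
```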

## Axolotl config

<details><summary>See axolotl config</summary>

Axolotl version: `0.4.1`
```yaml
base_model: /workspace/data/gemma-2-9b-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: false
liger_rms_norm: false
liger_swiglu: true
liger_cross_entropy: true
liger_fused_linear_cross_entropy: false

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: anthracite-core/c2_logs_16k_llama_v1.1
    type: sharegpt
    conversation: chatml
  - path: NewEden/Claude-Instruct-5K
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_misc_part2
    type: sharegpt
    conversation: chatml
chat_template: chatml
shuffle_merged_datasets: false
default_system_message: "You are a helpful assistant that responds to the user."
dataset_prepared_path: /workspace/data/9b-fft-data
val_set_size: 0.0
output_dir: /workspace/data/9b-fft-out

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: 9b-Nemo-config-fft
wandb_entity:
wandb_watch:
wandb_name: attempt-01
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
auto_resume_from_checkpoints: true
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.001
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>

```

</details><br>
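
For context on the scale of the run, here is a rough back-of-the-envelope reading of the config above. The arithmetic is mine rather than from the card, and it assumes every packed sequence is filled to the full context length.

```py
# Effective batch size implied by the config above; GPU count from the Training section.
micro_batch_size = 1      # micro_batch_size
grad_accum_steps = 4      # gradient_accumulation_steps
num_gpus = 8              # 8 x H100
sequence_len = 8192       # sequence_len, with sample_packing: true

sequences_per_step = micro_batch_size * grad_accum_steps * num_gpus  # 32 packed sequences
tokens_per_step = sequences_per_step * sequence_len                  # 262,144 tokens per optimizer step
print(sequences_per_step, tokens_per_step)
```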

## Credits

- [NewEden/Claude-Instruct-5K](https://huggingface.co/datasets/NewEden/Claude-Instruct-5K)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)
- [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
- [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827)
- [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2)
- [anthracite-core/c2_logs_16k_llama_v1.1](https://huggingface.co/datasets/anthracite-core/c2_logs_16k_llama_v1.1)

## Training
The released checkpoint was trained for 2 epochs. We used 8 x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs, graciously provided by [Lucy Knada](https://huggingface.co/lucyknada), for the full-parameter fine-tuning of the model.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety

Avoid misusing this model, or you’ll need a ‘clicker’ to reset reality. ;)

## Musings

One of the members of Anthracite had quite an interesting idea: finetune a smaller model for 4 epochs at a lower learning rate, the reasoning being that "smaller models learn slower." [Kalomaze]() provided access to 8 x A40s, and we finetuned what is now [Darkens-8B]() for 4 epochs (with its 2.5-epoch version released as [Tor-8B]()). The result was quite impressive: the 4-epoch model was not "overfit" at all and was rather pleasant to use. Lucy Knada then allowed me to do a full-parameter finetune with the same configuration as Darkens/Tor-8B (with some minor dataset tweaks) on 8 x H100s. We hosted and tested the models, and I ended up giving the green light to release the 4-epoch version as Magnum 9B V4 while releasing the 2-epoch version as my own. I felt both were extremely good models, but in testing I preferred the 2-epoch one. It was not as "suggestive" as Magnum models (and Claude RP log trained models) tend to be; it would not dive into Claudeisms right out of the gate, and you could use it for both safe-for-work and "other" purposes.