---
license: other
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
- axolotl
model-index:
- name: out
  results: []
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# Dolphin 2.9.1 Llama 3 8b 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations.

Discord: https://discord.gg/8fbBeC7ZGx

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

We have retrained our Llama-3-8b fine-tune to address behavioral issues introduced by our initial 2.9 dataset. Specifically, SystemChat was making the model *too* reliant on the system prompt, and it occasionally caused the model to reference the system prompt excessively. We also found that generation length was at times insufficient for the task at hand, and identified UltraChat as the culprit. To address these concerns, we removed SystemChat and UltraChat from the dataset. It is otherwise identical to dolphin-2.9.

My appreciation for the sponsors of Dolphin 2.9:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 10x L40S node

This model is based on Llama-3-8b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

The base model has 8k context, and the full-weight fine-tuning was performed with a 4k sequence length.

Training took 2.5 days on 8x L40S GPUs provided by Crusoe Cloud.

This model was trained with full-weight fine-tuning (FFT) on all parameters, using the ChatML prompt template format.

Example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

```
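For convenience, prompts in this format can also be built with the tokenizer's chat template rather than assembled by hand. Below is a minimal usage sketch with the `transformers` library; the repo ID is an assumption, so substitute your own path if you load the model from elsewhere.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo ID; adjust to your local path or mirror.
model_id = "cognitivecomputations/dolphin-2.9.1-llama-3-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about the ocean."},
]

# apply_chat_template renders the ChatML layout shown above, including the
# trailing <|im_start|>assistant header that cues the model to respond.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because `eos_token` is set to `<|im_end|>` (see the axolotl config below), generation stops at the end of the assistant turn without any extra stop-string handling.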
Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

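The function-calling behavior follows the tool-use datasets listed in the header (e.g. Locutusque/function-calling-chatml, internlm/Agent-FLAN); this card does not pin down a canonical schema, so the exchange below is only an illustrative sketch, with a hypothetical `get_weather` tool and JSON shape:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant. You have access to the following function:
{"name": "get_weather", "parameters": {"city": {"type": "string"}}}<|im_end|>
<|im_start|>user
What is the weather in Lisbon?<|im_end|>
<|im_start|>assistant
{"name": "get_weather", "arguments": {"city": "Lisbon"}}<|im_end|>
```
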
Dolphin is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

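What counts as an "alignment layer" is up to the deployer; one minimal sketch is a guardrail system prompt plus a post-generation check before anything reaches the user. Everything here, including the `moderate` helper, is a hypothetical placeholder for your own policy:

```python
GUARDRAIL_SYSTEM = (
    "You are Dolphin, a helpful AI assistant. "
    "Decline requests for illegal or harmful content."
)

def moderate(text: str) -> bool:
    """Hypothetical placeholder: swap in a real moderation model or rule set."""
    blocked_phrases = ("example banned phrase",)  # illustrative only
    return not any(phrase in text.lower() for phrase in blocked_phrases)

def guarded_reply(generate, user_prompt: str) -> str:
    # `generate` is any callable mapping (system_prompt, user_prompt) -> completion,
    # e.g. a thin wrapper around the transformers sketch above.
    reply = generate(GUARDRAIL_SYSTEM, user_prompt)
    return reply if moderate(reply) else "Sorry, I can't help with that."
```
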
Dolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial use, that complies with Meta's Llama-3 license. Dolphin was trained on data generated from GPT-4, among other models.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false


load_in_8bit: false
load_in_4bit: false
strict: false
model_config:

datasets:
  - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
    type: sharegpt
    conversation: chatml

chat_template: chatml


dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy
val_set_size: 0.0002
output_dir: ./out

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

gradient_accumulation_steps: 4
micro_batch_size: 3
num_epochs: 3
logging_steps: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

wandb_project: dolphin-2.9-mixtral-8x22b
wandb_watch:
wandb_run_id:
wandb_log_model:

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
xformers_attention:
flash_attention: true
saves_per_epoch: 4
save_total_limit: 2
save_steps:
evals_per_epoch: 4
eval_sample_packing: false
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
  pad_token: "<|end_of_text|>"
tokens:
  - "<|im_start|>"
  - "<|im_end|>"

```
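For reference, on the 8x L40S node described above this configuration works out to an effective global batch size of `micro_batch_size × gradient_accumulation_steps × 8 GPUs = 3 × 4 × 8 = 96` packed sequences of up to 4096 tokens per optimizer step (assuming one DeepSpeed ZeRO-3 process per GPU).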

</details><br>

### Framework versions

- Transformers 4.40.0
- PyTorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1