Akila committed (verified)
Commit ee3f1dc · 1 Parent(s): b3a8e49

Upload README.md

Files changed (1)
- README.md (+156 -0)
README.md ADDED
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: Mistral-of-Realms-7b
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
base_model_config: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
hub_model_id: Mistral-of-Realms-7b

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: Akila/ForgottenRealmsWikiDataset
    data_files:
      - specific_formats/FRW-J-axolotl-completion.jsonl
    type: completion
dataset_prepared_path:
val_set_size: 0.02
output_dir: ./qlora-out

# Using LoRA for lower cost
adapter: lora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj

sequence_len: 512
sample_packing: false
pad_to_sequence_len: true

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

# Only 2 epochs because of the small dataset
gradient_accumulation_steps: 3
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1
debug:
# Default DeepSpeed; a more aggressive config (e.g. zero2 or zero3) can be used if needed
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```

</details><br>
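
For reference, the adapter hyperparameters in the config above (r=8, alpha=16, dropout 0.05, targeting `q_proj` and `v_proj`) correspond roughly to the following PEFT `LoraConfig`. This is a minimal sketch for illustration, not the original training code; `bias` and `task_type` are assumed defaults.

```python
# Sketch only: an approximate PEFT LoraConfig for the adapter settings above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                   # lora_r
    lora_alpha=16,                         # lora_alpha
    lora_dropout=0.05,                     # lora_dropout
    target_modules=["q_proj", "v_proj"],   # lora_target_modules
    bias="none",                           # assumption: default bias handling
    task_type="CAUSAL_LM",
)
```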

# Mistral-of-Realms-7b

This model is a LoRA fine-tune of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [Akila/ForgottenRealmsWikiDataset](https://huggingface.co/datasets/Akila/ForgottenRealmsWikiDataset) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1762
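
Since this repository holds a PEFT LoRA adapter rather than merged weights, loading it for inference might look roughly like the sketch below. The repo id `Akila/Mistral-of-Realms-7b` is inferred from the `hub_model_id` in the config and the prompt is made up; adjust both to your setup.

```python
# Minimal inference sketch (assumptions: the adapter lives at
# "Akila/Mistral-of-Realms-7b" and a GPU with bitsandbytes support is available).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "Akila/Mistral-of-Realms-7b"  # assumed repo id; verify before use

# Mirror the training setup: 4-bit base model, bf16 compute.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# The adapter was trained on raw completions, so plain text continuation is the
# natural usage pattern (no chat/instruction template).
prompt = "Elminster of Shadowdale was"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```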

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
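
The total train batch size of 6 follows from the per-device batch size times the gradient-accumulation steps (2 × 3 = 6), which suggests training on a single device.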

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4401        | 0.0   | 1     | 2.5991          |
| 2.3719        | 0.25  | 2224  | 2.2777          |
| 2.1262        | 0.5   | 4448  | 2.2483          |
| 2.3942        | 0.75  | 6672  | 2.2234          |
| 2.3839        | 1.0   | 8896  | 2.2065          |
| 2.5641        | 1.25  | 11120 | 2.1937          |
| 2.1295        | 1.5   | 13344 | 2.1821          |
| 1.7813        | 1.75  | 15568 | 2.1773          |
| 1.9467        | 2.0   | 17792 | 2.1762          |

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
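
If a standalone checkpoint is preferred over loading the adapter at runtime, PEFT can merge the LoRA weights into the base model. A minimal sketch, with the adapter repo id again assumed rather than confirmed:

```python
# Sketch: merge the LoRA adapter into the bf16 base model and save a
# standalone checkpoint. Needs enough memory for the full 7B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "Akila/Mistral-of-Realms-7b")  # assumed repo id
merged = merged.merge_and_unload()  # folds the LoRA deltas into the base weights

merged.save_pretrained("mistral-of-realms-7b-merged")
AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1").save_pretrained(
    "mistral-of-realms-7b-merged"
)
```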