yamatazen committed · verified
Commit 8c38c99 · 1 Parent(s): 74a98ea

Update README.md

Files changed (1): README.md (+40 −39)
README.md CHANGED
@@ -1,39 +1,40 @@
 ---
 base_model:
 - yamatazen/Gemma2-Ataraxy-Psycho-9B
 - yamatazen/Gemma2-Evelyn-9B
 library_name: transformers
 tags:
 - mergekit
 - merge

 ---
+![image/png](https://huggingface.co/yamatazen/Gemma2-ObsidianLight-9B/resolve/main/Gemma2-ObsidianLight-9B.png?download=true)
 # Gemma2-ObsidianLight-9B

 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

 ## Merge Details
 ### Merge Method

 This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.

 ### Models Merged

 The following models were included in the merge:
 * [yamatazen/Gemma2-Ataraxy-Psycho-9B](https://huggingface.co/yamatazen/Gemma2-Ataraxy-Psycho-9B)
 * [yamatazen/Gemma2-Evelyn-9B](https://huggingface.co/yamatazen/Gemma2-Evelyn-9B)

 ### Configuration

 The following YAML configuration was used to produce this model:

 ```yaml
 merge_method: slerp
 dtype: bfloat16
 out_dtype: bfloat16
 base_model: yamatazen/Gemma2-Evelyn-9B
 models:
   - model: yamatazen/Gemma2-Ataraxy-Psycho-9B
     parameters:
       t: [0.25, 0.3, 0.5, 0.3, 0.25]
 ```
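For readers unfamiliar with the method referenced above: SLERP (spherical linear interpolation) blends two weight tensors along the arc between them on a hypersphere rather than along a straight line, which tends to preserve the magnitude of the merged weights better than plain averaging. Below is a minimal NumPy sketch of the interpolation itself; it is not mergekit's actual implementation, which applies this per tensor across the two checkpoints and (per mergekit's convention) interpolates a list-valued `t` such as `[0.25, 0.3, 0.5, 0.3, 0.25]` as a gradient across layer groups.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8, dot_threshold=0.9995):
    """Spherical linear interpolation between tensors v0 and v1 at fraction t."""
    # Normalize copies to measure the angle between the two tensors
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.sum(u0 * u1), -1.0, 1.0)

    # Nearly colinear tensors: fall back to linear interpolation
    if abs(dot) > dot_threshold:
        return (1.0 - t) * v0 + t * v1

    theta = np.arccos(dot)          # angle between the tensors
    sin_theta = np.sin(theta)
    # Weighted combination along the great-circle arc
    return (np.sin((1.0 - t) * theta) / sin_theta) * v0 \
         + (np.sin(t * theta) / sin_theta) * v1
```

At `t=0` this returns `v0` (here, the `base_model` weights) and at `t=1` it returns `v1`; the mid-layer value `t=0.5` in the config above weights both parents equally.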