FallenMerick committed on
Commit
18f7952
1 Parent(s): f78b690

Update README.md

Files changed (1)
  1. README.md +46 -44
README.md CHANGED
@@ -1,44 +1,46 @@
- ---
- base_model:
- - KoboldAI/LLaMA2-13B-Psyfighter2
- - KoboldAI/LLaMA2-13B-Erebus-v3
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # Psyfighter2-Orca2-Erebus3-2
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * E:\Machine Learning\Models\Orca2-Flat-BF16
- * [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
- - model: KoboldAI/LLaMA2-13B-Psyfighter2
- - model: E:\Machine Learning\Models\Orca2-Flat-BF16
-   parameters:
-     weight: 1.0
-     density: 0.4
- - model: KoboldAI/LLaMA2-13B-Erebus-v3
-   parameters:
-     weight: 1.0
-     density: 0.2
- merge_method: ties
- base_model: KoboldAI/LLaMA2-13B-Psyfighter2
- dtype: bfloat16
- ```
 
 
 
+ ---
+ base_model:
+ - KoboldAI/LLaMA2-13B-Psyfighter2
+ - TeeZee/Orca-2-13b_flat
+ - KoboldAI/LLaMA2-13B-Erebus-v3
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # Psyfighter2-Orca2-Erebus3-2
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
+ * [TeeZee/Orca-2-13b_flat](https://huggingface.co/TeeZee/Orca-2-13b_flat)
+ * [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+ - model: KoboldAI/LLaMA2-13B-Psyfighter2
+ - model: TeeZee/Orca-2-13b_flat
+   parameters:
+     weight: 1.0
+     density: 0.4
+ - model: KoboldAI/LLaMA2-13B-Erebus-v3
+   parameters:
+     weight: 1.0
+     density: 0.2
+ merge_method: ties
+ base_model: KoboldAI/LLaMA2-13B-Psyfighter2
+ dtype: bfloat16
+ ```
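
For context, the TIES procedure referenced in the config (trim each task vector to a `density` fraction by magnitude, elect a per-parameter sign, then merge the agreeing values) can be sketched numerically. This is an illustrative NumPy sketch of the idea, not mergekit's actual implementation; the function name `ties_merge` and the sign election by sign-of-sum are simplifying assumptions.

```python
import numpy as np

def ties_merge(base, tuned, densities, weights):
    """Sketch of TIES merging on flat weight vectors:
    1. form each task vector (tuned - base), scaled by its weight,
    2. trim it to its top-`density` fraction of entries by magnitude,
    3. elect a sign per parameter from the sum of trimmed vectors,
    4. average the surviving, sign-agreeing entries onto the base."""
    deltas = []
    for t, d, w in zip(tuned, densities, weights):
        delta = (t - base) * w                  # scaled task vector
        k = int(np.ceil(d * delta.size))        # how many entries survive
        thresh = np.sort(np.abs(delta))[-k] if k > 0 else np.inf
        delta = np.where(np.abs(delta) >= thresh, delta, 0.0)
        deltas.append(delta)
    stacked = np.stack(deltas)
    sign = np.sign(stacked.sum(axis=0))         # elected sign per parameter
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    count = np.maximum(agree.sum(axis=0), 1)    # avoid divide-by-zero
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0) / count
    return base + merged_delta
```

With `density: 0.4` and `density: 0.2` as in the config above, only the largest 40% and 20% of each task vector's entries survive trimming, so the two donor models perturb the Psyfighter2 base sparsely and mostly where they agree in direction.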