sleepdeprived3 committed c21c8cd (verified) · parent: e5785b3
Upload README.md with huggingface_hub

Files changed (1): README.md added (+61 lines)
---
base_model:
- nbeerbower/mistral-nemo-gutenberg-12B
- TheDrummer/UnslopNemo-12B-v3
- ReadyArt/Forgotten-Safeword-12B-V3.0
- Trappu/Magnum-Picaro-0.7-v2-12b
- ReadyArt/The-Omega-Directive-M-12B-v1.0
library_name: transformers
tags:
- mergekit
- merge

---
<img src="./ots75.jpg" class="waifu-img" alt="Protocol Mascot">
# output-model

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [ReadyArt/The-Omega-Directive-M-12B-v1.0](https://huggingface.co/ReadyArt/The-Omega-Directive-M-12B-v1.0) as the base model.
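
At a high level, DARE randomly drops most of each fine-tuned model's delta against the base and rescales the surviving entries, and TIES then resolves sign conflicts between the remaining deltas before they are added back onto the base weights. Below is a minimal NumPy sketch of that idea on a single weight tensor; it is an illustration only, not mergekit's implementation, and the helper name and simplified sign election are ours:

```python
import numpy as np

def dare_ties_merge(base, finetuned, weights, density, seed=0):
    """Illustrative DARE TIES merge of one tensor (not mergekit's actual code).

    base:      np.ndarray, the base model tensor
    finetuned: list of np.ndarray, same shape as base
    weights:   per-model mixing weights (0.2 each in the config below)
    density:   fraction of delta entries DARE keeps (0.3 in the config below)
    """
    rng = np.random.default_rng(seed)
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base                             # task vector of one model
        keep = rng.random(delta.shape) < density      # DARE: keep ~density of entries
        delta = np.where(keep, delta, 0.0) / density  # rescale survivors to preserve scale
        deltas.append(w * delta)

    stacked = np.stack(deltas)                        # (n_models, *tensor_shape)
    # TIES-style sign election: per parameter, keep only contributions whose
    # sign agrees with the sign of the summed contribution.
    elected = np.sign(stacked.sum(axis=0))
    agree = np.sign(stacked) == elected
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    return base + merged_delta

# Example call matching the config below:
# merged = dare_ties_merge(base, [ft1, ft2, ft3, ft4, ft5], [0.2] * 5, density=0.3)
```

In the configuration below, `density: 0.3` is the DARE keep fraction and each of the five models contributes with `weight: 0.2`.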

### Models Merged

The following models were included in the merge:
* [nbeerbower/mistral-nemo-gutenberg-12B](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B)
* [TheDrummer/UnslopNemo-12B-v3](https://huggingface.co/TheDrummer/UnslopNemo-12B-v3)
* [ReadyArt/Forgotten-Safeword-12B-V3.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-12B-V3.0)
* [Trappu/Magnum-Picaro-0.7-v2-12b](https://huggingface.co/Trappu/Magnum-Picaro-0.7-v2-12b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: ReadyArt/The-Omega-Directive-M-12B-v1.0
models:
  - model: ReadyArt/The-Omega-Directive-M-12B-v1.0
    parameters:
      weight: 0.2
  - model: ReadyArt/Forgotten-Safeword-12B-V3.0
    parameters:
      weight: 0.2
  - model: TheDrummer/UnslopNemo-12B-v3
    parameters:
      weight: 0.2
  - model: Trappu/Magnum-Picaro-0.7-v2-12b
    parameters:
      weight: 0.2
  - model: nbeerbower/mistral-nemo-gutenberg-12B
    parameters:
      weight: 0.2
parameters:
  density: 0.3
low_memory: true
tokenizer:
  source: union
chat_template: auto
```
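
Once the merge has been produced from this configuration (typically with mergekit's `mergekit-yaml` command), the output is a standard `transformers` checkpoint. A minimal loading sketch follows; the model path is a placeholder for wherever the merged weights actually live (local output directory or Hub repo id):

```python
# Usage sketch (not part of the original card): load and sample the merged model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./output-model"  # placeholder: local merge output dir or Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the merged dtype
    device_map="auto",    # requires `accelerate`; places layers on available devices
)

prompt = "Summarize the plot of a gothic novel in three sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```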