Sorawiz committed (verified) · Commit 1222938 · 1 parent: 185c6e5

Update README.md

Files changed (1): README.md (+78 −2)
README.md CHANGED
@@ -1,7 +1,11 @@
 ---
 base_model:
+- Sorawiz/MistralCreative-24B-Chat
+- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
+- ReadyArt/Forgotten-Abomination-24B-v4.0
+- ReadyArt/Forgotten-Transgression-24B-v4.1
+- ReadyArt/Gaslight-24B-v1.0
 - ReadyArt/The-Omega-Directive-M-24B-v1.0
-- Sorawiz/MistralCreative-24B-Test-U
 - anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF
 library_name: transformers
 tags:
@@ -9,7 +13,38 @@ tags:
 - merge

 ---
-# merge
+# Chat Template
+
+Mistral Instruct
+
+```
+{{ if .System }}<|im_start|>system
+{{ .System }}<|im_end|>
+{{ end }}{{ if .Prompt }}<|im_start|>user
+{{ .Prompt }}<|im_end|>
+{{ end }}<|im_start|>assistant
+{{ .Response }}<|im_end|>
+```
+
+ChatML
+
+```
+{{ if .System }}<|im_start|>system
+{{ .System }}<|im_end|>
+{{ end }}{{ if .Prompt }}<|im_start|>user
+{{ .Prompt }}<|im_end|>
+{{ end }}<|im_start|>assistant
+{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}
+```
+
+# GGUF
+
+Thank you [mradermacher](https://huggingface.co/mradermacher) for creating the GGUF versions of this model.
+
+* Static quants - [mradermacher/MistralCreative-24B-Instruct-GGUF](https://huggingface.co/mradermacher/MistralCreative-24B-Instruct-GGUF)
+* Imatrix quants - [mradermacher/MistralCreative-24B-Instruct-i1-GGUF](https://huggingface.co/mradermacher/MistralCreative-24B-Instruct-i1-GGUF)
+
+# Merge

 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

@@ -29,6 +64,47 @@ The following models were included in the merge:
 The following YAML configuration was used to produce this model:

 ```yaml
+name: Sorawiz/MistralCreative-24B-Test-E
+merge_method: dare_ties
+base_model: Sorawiz/MistralCreative-24B-Chat
+models:
+  - model: Sorawiz/MistralCreative-24B-Chat
+    parameters:
+      weight: 0.20
+  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
+    parameters:
+      weight: 0.20
+  - model: ReadyArt/Forgotten-Transgression-24B-v4.1
+    parameters:
+      weight: 0.30
+  - model: ReadyArt/Forgotten-Abomination-24B-v4.0
+    parameters:
+      weight: 0.30
+parameters:
+  density: 1
+tokenizer:
+  source: union
+chat_template: auto
+---
+name: Sorawiz/MistralCreative-24B-Test-U
+merge_method: dare_ties
+base_model: Sorawiz/MistralCreative-24B-Test-E
+models:
+  - model: Sorawiz/MistralCreative-24B-Test-E
+    parameters:
+      weight: 0.3
+  - model: ReadyArt/Gaslight-24B-v1.0
+    parameters:
+      weight: 0.5
+  - model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
+    parameters:
+      weight: 0.2
+parameters:
+  density: 0.70
+tokenizer:
+  source: union
+chat_template: auto
+---
 models:
   - model: anthracite-core/Mistral-Small-3.1-24B-Instruct-2503-HF
   - model: Sorawiz/MistralCreative-24B-Test-U
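The ChatML template added in this diff is written in Go-template syntax (the style used by Ollama Modelfiles). For anyone wiring up a client by hand, the same rendering logic can be sketched in plain Python; `format_chatml` is a hypothetical helper name, not part of this repository:

```python
def format_chatml(system: str = "", prompt: str = "", response: str = "") -> str:
    """Render a prompt per the ChatML template in the README diff.

    Mirrors the Go-template conditionals: system and user turns are
    emitted only if non-empty, and the closing <|im_end|> after the
    assistant turn is emitted only when a response is present (so a
    bare assistant header is left open for the model to complete).
    """
    out = ""
    if system:
        out += f"<|im_start|>system\n{system}<|im_end|>\n"
    if prompt:
        out += f"<|im_start|>user\n{prompt}<|im_end|>\n"
    out += f"<|im_start|>assistant\n{response}"
    if response:
        out += "<|im_end|>"
    return out


# Typical generation-time use: no response yet, assistant header left open.
print(format_chatml(system="You are helpful.", prompt="Hello"))
```

Note the one difference between the two templates in the diff: the "Mistral Instruct" variant always appends `<|im_end|>` after the response, while the ChatML variant guards it with `{{ if .Response }}`, which matters when the template is used to open an assistant turn for generation.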