---
base_model:
- NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3
- rombodawg/Open_Gpt4_8x7B_v0.2
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
- not-for-all-audiences
- nsfw
- mixtral
license: cc-by-nc-4.0
---

<!-- description start -->
My ExLlamaV2 3.75 bpw quantization of [NoromaidxOpenGPT4-2](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-2), quantized with the default calibration dataset. The measurement JSON is included so you can make your own quants.
> [!IMPORTANT]
> This bpw is the perfect size for 24 GB cards and can fit 32k context. Make sure to enable the 4-bit cache option.

> [!NOTE]
> This model is great for RP, and I recommend using the Alpaca presets in SillyTavern.

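As a rough illustration of the notes above, here is a minimal loading sketch using the exllamav2 Python package (not part of the original card). The model directory, sampler settings, and prompt are placeholders, and `ExLlamaV2Cache_Q4` assumes a build of exllamav2 that ships the 4-bit cache.

```python
# Minimal sketch (not from the original card): loading the 3.75 bpw quant with
# exllamav2 on a 24 GB GPU, with 32k context and the 4-bit cache enabled.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/NoromaidxOpenGPT4-2-3.75bpw"  # local download dir (placeholder)
config.prepare()
config.max_seq_len = 32768                                 # full 32k context

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, lazy=True)                # the 4-bit KV cache mentioned above
model.load_autosplit(cache)                                # fills available VRAM as needed

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                                 # illustrative sampler value

prompt = "### Instruction:\nYou are a helpful roleplay partner.\n\n### Input:\nHello!\n\n### Response:\n"
print(generator.generate_simple(prompt, settings, 200))
```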
## Original Card
## Description

This repo contains fp16 files of NoromaidxOpenGPT4-2.

The model was created by merging Noromaid-8x7b-Instruct with Open_Gpt4_8x7B_v0.2 in exactly the same way [Rombodawg](https://huggingface.co/rombodawg) did his merge.

The only difference between [NoromaidxOpenGPT4-1](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-1/) and [NoromaidxOpenGPT4-2](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-2/) is that the first iteration uses Mixtral-8x7B as the base for the merge (fp16), while the second uses Open_Gpt4_8x7B_v0.2 as the base (bf16).

After further testing and usage, both models were released, because they each have their own qualities.

You can download the imatrix file [HERE](https://huggingface.co/NeverSleep/NoromaidxOpenGPT4-2/blob/main/imatrix-2.dat) to make many other quants.
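For example, assuming you have converted the fp16 repo to GGUF and built llama.cpp, the imatrix file can be passed to llama.cpp's quantization tool roughly as sketched below (not from the original card; file names are placeholders and the binary is called `quantize` in older builds).

```python
# Hypothetical sketch: producing another GGUF quant with llama.cpp using the
# imatrix from this repo. File names and the binary name are assumptions.
import subprocess

subprocess.run(
    [
        "./llama-quantize",            # "./quantize" in older llama.cpp builds
        "--imatrix", "imatrix-2.dat",  # importance matrix downloaded from this repo
        "noromaidx-f16.gguf",          # fp16 GGUF converted from the original repo (placeholder)
        "noromaidx-Q4_K_M.gguf",       # output file (placeholder)
        "Q4_K_M",                      # target quant type
    ],
    check=True,
)
```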
<!-- description end -->
<!-- prompt-template start -->
### Prompt template:

## Alpaca

```
### Instruction:
{system prompt}

### Input:
{prompt}

### Response:
{output}
```

## Mistral

```
[INST] {prompt} [/INST]
```

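Since the Alpaca layout above is the recommended one, here is a tiny helper (an illustration, not part of the original card) that fills in its placeholders:

```python
# Illustrative helper: builds a prompt string in the Alpaca layout shown above.
def alpaca_prompt(system_prompt: str, user_prompt: str) -> str:
    return (
        "### Instruction:\n"
        f"{system_prompt}\n\n"
        "### Input:\n"
        f"{user_prompt}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("You are a creative roleplay partner.", "Describe the tavern we just entered."))
```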
## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [rombodawg/Open_Gpt4_8x7B_v0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) as the base.

### Models Merged

The following models were included in the merge:
* [NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3](https://huggingface.co/NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3)
* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: .5
      weight: 1
  - model: NeverSleep/Noromaid-v0.1-mixtral-8x7b-Instruct-v3
    parameters:
      density: .5
      weight: .7
merge_method: ties
base_model: rombodawg/Open_Gpt4_8x7B_v0.2
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```

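To reproduce a merge like this, a config of this shape can be fed to mergekit's `mergekit-yaml` entry point. The sketch below is an assumption-laden example (file and output names are placeholders), not the exact command the authors used.

```python
# Hypothetical sketch: running mergekit (pip install mergekit) on the config above,
# saved locally as "noromaidx.yaml"; the output directory name is also a placeholder.
import subprocess

subprocess.run(
    ["mergekit-yaml", "noromaidx.yaml", "./NoromaidxOpenGPT4-2", "--cuda"],
    check=True,
)
```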
### Support

If you want to support us, you can do so [here](https://ko-fi.com/undiai).