---
base_model:
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- ChaoticNeutrals/This_is_fine_7B
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- mergekit
- merge
pipeline_tag: text-generation
inference: false
---

# **GGUF-Imatrix quantizations for [ChaoticNeutrals/Prodigy_7B](https://huggingface.co/ChaoticNeutrals/Prodigy_7B/).**

*If you want any specific quantization to be added, feel free to ask.*

All credits belong to the [creator](https://huggingface.co/ChaoticNeutrals/).

`Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)`

The new **IQ3_S** quant has been shown to perform better than the old Q3_K_S, so it is included instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher.

Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2277](https://github.com/ggerganov/llama.cpp/releases/tag/b2277).

For the `--imatrix` data, `imatrix-Prodigy_7B-F16.dat` was used.

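The pipeline above corresponds roughly to the following llama.cpp commands (a sketch only: the model path, output names, and the calibration file `calibration.txt` are placeholders, and tool names can differ between llama.cpp builds):

```shell
# 1. Convert the base HF model to an F16 GGUF (paths are placeholders).
python convert.py ./Prodigy_7B --outtype f16 --outfile Prodigy_7B-F16.gguf

# 2. Generate importance-matrix data by running the F16 model over a calibration text.
./imatrix -m Prodigy_7B-F16.gguf -f calibration.txt -o imatrix-Prodigy_7B-F16.dat

# 3. Quantize using the imatrix data, e.g. to IQ3_S.
./quantize --imatrix imatrix-Prodigy_7B-F16.dat Prodigy_7B-F16.gguf Prodigy_7B-IQ3_S.gguf IQ3_S
```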
# Original model information:

# Wing

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/S-E_CADzfAg3xaVX01rdx.jpeg)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

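As a sketch of what SLERP does per tensor: instead of averaging two weight vectors along a straight line, it interpolates along the arc between them, which preserves their magnitude better. A minimal pure-Python illustration (mergekit's actual implementation operates on tensors and handles more edge cases):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two vectors (lists of floats)."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Cosine of the angle between the vectors, clamped for numerical safety.
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1)))
    omega = math.acos(cos_omega)
    if omega < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

At `t=0` this returns the first model's weights, at `t=1` the second's, and in between it blends them along the arc.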
### Models Merged

The following models were included in the merge:
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [ChaoticNeutrals/This_is_fine_7B](https://huggingface.co/ChaoticNeutrals/This_is_fine_7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: ChaoticNeutrals/This_is_fine_7B
        layer_range: [0, 32]
      - model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
        layer_range: [0, 32]
merge_method: slerp
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
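The five-element `value` lists in the config are gradient anchors: each layer gets its own interpolation weight `t`, derived from the anchors (to my understanding, by piecewise-linear interpolation across the 32 layers). The helper below is a hypothetical sketch of that expansion, not mergekit's actual code:

```python
def expand_gradient(anchors, n_layers):
    """Piecewise-linearly interpolate anchor values to one t per layer.

    Sketch of how a gradient list such as [0, 0.5, 0.3, 0.7, 1] might be
    expanded over 32 layers; mergekit's exact scheme may differ.
    """
    if n_layers == 1:
        return [anchors[0]]
    out = []
    for i in range(n_layers):
        # Position of this layer on the anchor axis [0, len(anchors) - 1].
        pos = i * (len(anchors) - 1) / (n_layers - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        out.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return out
```

Under this reading, the first self-attention layers lean toward This_is_fine_7B (`t` near 0), the last ones toward WestLake (`t` near 1), the MLP gradient runs the other way, and everything without a filter blends evenly at `t = 0.5`.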