---
base_model:
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- ChaoticNeutrals/This_is_fine_7B
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- mergekit
- merge
pipeline_tag: text-generation
inference: false
---

# **GGUF-Imatrix quantizations for [ChaoticNeutrals/Prodigy_7B](https://huggingface.co/ChaoticNeutrals/Prodigy_7B/).**

# What does "Imatrix" mean?

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.

The **Imatrix** is computed from calibration data and is used to determine how important each model activation is during the quantization process. The idea is to preserve the most important information, which helps reduce the quality loss that quantization would otherwise cause.

One of the benefits of using an Imatrix is that it can lead to better model performance, especially when the calibration data is diverse.
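
As a toy illustration of the idea (a simplified sketch, not llama.cpp's actual quantization kernels): given per-weight importances derived from calibration activations, a quantizer can choose the scale that minimizes the importance-weighted rounding error instead of the plain rounding error.

```python
import numpy as np

def quantize_row(w: np.ndarray, importance: np.ndarray, bits: int = 4):
    """Toy sketch: round one weight row to signed `bits`-bit integers,
    picking the scale that minimizes the importance-weighted squared
    error sum(importance * (w - q * s)**2)."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for 4-bit symmetric
    base = np.abs(w).max() / qmax               # naive max-abs scale
    best_err, best = np.inf, None
    for s in base * np.linspace(0.7, 1.3, 61):  # search around the naive scale
        q = np.clip(np.round(w / s), -qmax - 1, qmax)
        err = float(np.sum(importance * (w - q * s) ** 2))
        if err < best_err:
            best_err, best = err, (q.astype(np.int8), s)
    return best

rng = np.random.default_rng(0)
w = rng.normal(size=64).astype(np.float32)
imp = rng.random(64).astype(np.float32)  # stand-in for mean squared activations
q, s = quantize_row(w, imp)
print("importance-weighted MSE:", float(np.sum(imp * (w - q * s) ** 2)))
```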

More information: [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

*If you want any specific quantization to be added, feel free to ask.*

All credits belong to the [creator](https://huggingface.co/ChaoticNeutrals/).

`Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)`
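
A minimal sketch of those three stages, driven from Python with llama.cpp's `convert.py`, `imatrix`, and `quantize` tools (the model directory, calibration file, and output names below are placeholders):

```python
import subprocess

HF_MODEL_DIR = "Prodigy_7B"          # placeholder: local copy of the base model
CALIB_FILE = "calibration-data.txt"  # placeholder: text used to build the imatrix

# 1) Base -> GGUF(F16)
subprocess.run(["python", "convert.py", HF_MODEL_DIR,
                "--outtype", "f16", "--outfile", "Prodigy_7B-F16.gguf"], check=True)

# 2) GGUF(F16) -> Imatrix-Data(F16)
subprocess.run(["./imatrix", "-m", "Prodigy_7B-F16.gguf",
                "-f", CALIB_FILE, "-o", "imatrix-Prodigy_7B-F16.dat"], check=True)

# 3) GGUF(F16) + imatrix -> GGUF(Imatrix-Quants), e.g. IQ3_S
subprocess.run(["./quantize", "--imatrix", "imatrix-Prodigy_7B-F16.dat",
                "Prodigy_7B-F16.gguf", "Prodigy_7B-IQ3_S.gguf", "IQ3_S"], check=True)
```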

The new **IQ3_S** quant option has been shown to outperform the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher.

Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2277](https://github.com/ggerganov/llama.cpp/releases/tag/b2277).

For the `--imatrix` data, `imatrix-Prodigy_7B-F16.dat` was used.
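
For a quick local test of one of the resulting files, a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (the filename is a placeholder; IQ3_S files need a build based on a recent enough llama.cpp):

```python
from llama_cpp import Llama

# Placeholder filename; use whichever quant you downloaded.
llm = Llama(model_path="Prodigy_7B-Q4_K_M.gguf", n_ctx=2048)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```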

# Original model information:

# Wing

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/S-E_CADzfAg3xaVX01rdx.jpeg)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.
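
For intuition, SLERP interpolates along the arc between two weight tensors instead of the straight line between them, which preserves their scale better than plain averaging. A minimal sketch on flattened tensors (mergekit's actual implementation handles more edge cases):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between them
    if np.sin(omega) < eps:              # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```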

### Models Merged

The following models were included in the merge:
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [ChaoticNeutrals/This_is_fine_7B](https://huggingface.co/ChaoticNeutrals/This_is_fine_7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: ChaoticNeutrals/This_is_fine_7B
        layer_range: [0, 32]
      - model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
        layer_range: [0, 32]
merge_method: slerp
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
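
To reproduce the merge, save the YAML above (e.g. as `config.yml`, a placeholder name) and run it through mergekit's `mergekit-yaml` entry point; the value lists under `t` define a per-layer interpolation gradient between the two models, with `0.5` used for all remaining tensors.

```python
import subprocess

# Assumes mergekit is installed (pip install mergekit) and the YAML above
# was saved as config.yml; the output directory name is a placeholder.
subprocess.run(["mergekit-yaml", "config.yml", "./Prodigy_7B"], check=True)
```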