---
license: apache-2.0
tags:
- not-for-all-audiences
- writing
- roleplay
- gguf
- gguf-imatrix
base_model:
- nakodanei/Blue-Orchid-2x7b
model_type: mixtral
quantized_by: Green-Sky
language:
- en
---
llama.cpp conversion of https://huggingface.co/nakodanei/Blue-Orchid-2x7b/

Except for f16 and q8_0, every quant uses `merge.imatrix`.

`merge.imatrix` is a merge of `kalomaze-group_10_merged.172chunks.imatrix` and `wiki.train.400chunks.imatrix`, which took ~10 min + ~20 min to calculate on my machine. Computing an imatrix over the full wiki.train set would have taken ~10 h.

For more info on imatrix handling, see https://github.com/ggerganov/llama.cpp/pull/5302
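The steps above can be sketched roughly as follows. This is an illustrative workflow, not the exact commands used: it assumes llama.cpp's `imatrix` tool (with the multi-file combining described in the linked PR) and the `quantize` tool's `--imatrix` flag; the model filename and calibration text filenames are placeholders.

```shell
# Compute the two partial importance matrices from calibration text
# (filenames are assumptions; chunk counts match the imatrix names above):
./imatrix -m blue-orchid-2x7b-f16.gguf -f kalomaze-groups.txt \
          --chunks 172 -o kalomaze-group_10_merged.172chunks.imatrix
./imatrix -m blue-orchid-2x7b-f16.gguf -f wiki.train.raw \
          --chunks 400 -o wiki.train.400chunks.imatrix

# Combine the two imatrix files into one (per the linked PR):
./imatrix --in-file kalomaze-group_10_merged.172chunks.imatrix \
          --in-file wiki.train.400chunks.imatrix \
          -o merge.imatrix

# Quantize using the merged importance matrix:
./quantize --imatrix merge.imatrix \
           blue-orchid-2x7b-f16.gguf blue-orchid-2x7b-iq3_xxs.gguf iq3_xxs
```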
### ppl (wiki.test, 512 context, 300 chunks)
| quant           | ppl (lower is better) |
|-----------------|-----------------------|
| f16 (baseline)  | xxx |
| q8_0            | xxx |
| q5_k_m          | xxx |
| q4_k_m          | xxx |
| iq3_xxs (merge) | 6.1984 +/- 0.05475 |
| q2_k            | xxx |
| iq2_xs          | xxx |
| iq2_xxs         | xxx |