Update README.md
README.md CHANGED

@@ -19,7 +19,7 @@ model-index:
 # Quant Infos
 
 - quants done with an importance matrix for improved quantization loss
-- ggufs & imatrix generated from bf16 for "optimal" accuracy loss
+- ggufs & imatrix generated from bf16 for "optimal" accuracy loss
 - Wide coverage of different gguf quant types from Q_8_0 down to IQ1_S
 - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [dc685be46622a8fabfd57cfa804237c8f15679b8](https://github.com/ggerganov/llama.cpp/commit/dc685be46622a8fabfd57cfa804237c8f15679b8) (master as of 2024-05-12)
 - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset.
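For anyone reproducing these quants, here is a minimal sketch of the workflow the list describes (HF checkpoint to bf16 GGUF, imatrix from the bf16 GGUF and the calibration dataset, then quantized GGUFs). It assumes a built llama.cpp checkout from around the referenced commit; the tool names (`convert-hf-to-gguf.py`, `imatrix`, `quantize`), their flags, and all paths below are placeholders/assumptions, not part of this commit, and may differ in other llama.cpp versions.

```python
# Hedged sketch of the quantization workflow described above.
# Assumes a built llama.cpp checkout; tool names/flags may differ by version.
import subprocess

MODEL_DIR = "path/to/hf-model"     # placeholder: original HF checkpoint
BF16_GGUF = "model-bf16.gguf"      # placeholder output names
IMATRIX = "imatrix.dat"
DATASET = "imatrix-dataset.txt"    # the multi-purpose calibration text linked above

# 1. Convert the HF checkpoint to a bf16 GGUF.
subprocess.run(
    ["python", "convert-hf-to-gguf.py", MODEL_DIR,
     "--outtype", "bf16", "--outfile", BF16_GGUF],
    check=True,
)

# 2. Compute the importance matrix from the bf16 GGUF and the calibration set.
subprocess.run(
    ["./imatrix", "-m", BF16_GGUF, "-f", DATASET, "-o", IMATRIX],
    check=True,
)

# 3. Produce quants (illustrative subset of the Q8_0 .. IQ1_S range) using the imatrix.
for qtype in ["Q8_0", "Q4_K_M", "IQ2_XS", "IQ1_S"]:
    subprocess.run(
        ["./quantize", "--imatrix", IMATRIX,
         BF16_GGUF, f"model-{qtype}.gguf", qtype],
        check=True,
    )
```

Deriving both the imatrix and the quants from the bf16 GGUF, rather than from an already-quantized file, is what the "ggufs & imatrix generated from bf16" bullet refers to.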