---
license: apache-2.0
---
This repository contains importance matrix datasets for use with the improved quantization methods recently added to llama.cpp.
The importance matrices have been computed using wiki.train.raw as training data.
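
For reference, data files like these can be produced with llama.cpp's `imatrix` tool; the model path and output name below are illustrative, not the exact invocation used for this repo:

```sh
# Example (paths illustrative): compute an importance matrix from wiki.train.raw
./imatrix -m path_to_model/ggml-model-f16.gguf -f wiki.train.raw -o mixtral-8x7b.imatrix
```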
Hopefully the file names are self-explanatory.
To use, after cloning this repo, pass the matching imatrix file to `quantize`. For example, for Mixtral-8x7B with Q4_K_M quantization:

```sh
./quantize --imatrix path_to_repo/mixtral-8x7b.imatrix path_to_model ggml-model-q4k-m.gguf Q4_K_M
```
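
As a quick sanity check of the resulting model, perplexity on wiki.test.raw can be measured with llama.cpp's `perplexity` tool; the file names below are illustrative:

```sh
# Example (file names illustrative): measure perplexity of the quantized model
./perplexity -m ggml-model-q4k-m.gguf -f wiki.test.raw
```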