Difference between this repo and the official GGUF repo?
I'm new to all of this quantization, so excuse my question if it's obvious. What is the difference between this repo's GGUF models and, e.g., the official Qwen GGUF models (https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct-GGUF)? Or those from, e.g., unsloth?
I found "All quants made using imatrix option with dataset from here(link)" but I can't figure out what that means or whether it contributes to any differences. Or is it "just" another repo with quants? Which would be a valid reason in itself.
Edit: I see that your repo also has more quantization types. My question is more on the differences between the same quantization, for example Q4_K_M, which both repos have.
PS: Thank you for providing all these models!
Imatrix is an attempt to improve the overall quality of the model, at the same bits per weight and quant structure, by running a corpus of data (the one I linked) through the model and accumulating the activations associated with each weight, thus identifying the "important" weights. This information is then used to make a more informed choice of the rounding parameters, like scale and offset, so that the important weights are represented more accurately in the final result.
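To make that concrete, here's a minimal sketch (not llama.cpp's actual code) of the idea: when picking a quantization scale for a block of weights, an importance-weighted error lets you trade accuracy on unimportant weights for accuracy on important ones. The block layout, grid search, and importance values below are simplified illustrations, not the real imatrix implementation.

```python
import numpy as np

def quantize_block(w, scale, zero):
    # round-to-nearest 4-bit quantization (levels 0..15), then dequantize
    q = np.clip(np.round((w - zero) / scale), 0, 15)
    return q * scale + zero

def weighted_error(w, w_hat, imp):
    # reconstruction error, with each weight's error scaled by its importance
    return np.sum(imp * (w - w_hat) ** 2)

def best_scale(w, imp, n_try=32):
    # grid-search candidate scales around the naive (max - min) / 15 choice,
    # keeping the one with the lowest importance-weighted error
    zero = w.min()
    base = (w.max() - zero) / 15
    best_s, best_err = base, weighted_error(w, quantize_block(w, base, zero), imp)
    for f in np.linspace(0.7, 1.3, n_try):
        s = base * f
        err = weighted_error(w, quantize_block(w, s, zero), imp)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

rng = np.random.default_rng(0)
w = rng.normal(size=32).astype(np.float32)    # one block of weights
imp = rng.random(32).astype(np.float32)       # per-weight importance (in practice, from activations)

zero = w.min()
s_plain = (w.max() - zero) / 15               # importance-unaware scale
s_imp = best_scale(w, imp)                    # importance-aware scale
err_plain = weighted_error(w, quantize_block(w, s_plain, zero), imp)
err_imp = weighted_error(w, quantize_block(w, s_imp, zero), imp)
```

By construction the importance-weighted search never does worse than the naive scale on the weighted error, and it typically does better. Same bits per weight, same block structure; only the choice of rounding parameters changes.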
You can find a bit more info from the original Reddit discussion here: