---
license: apache-2.0
tags:
- mistral
- conversational
- text-generation-inference
base_model: intervitens/mini-magnum-12b-v1.1
library_name: transformers
---

> [!WARNING]
> **Sampling:**
> Mistral-Nemo-12B is very sensitive to the temperature sampler; start with values near **0.3**, or the output may become incoherent. Mistral AI mentions this in the [Transformers](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407#transformers) section of the original model card.

**Original Model:** [intervitens/mini-magnum-12b-v1.1](https://huggingface.co/intervitens/mini-magnum-12b-v1.1)

**How to Use:** [llama.cpp](https://github.com/ggerganov/llama.cpp) (see the example below)

**Original Model License:** Apache 2.0

**Release Used:** [b3452](https://github.com/ggerganov/llama.cpp/releases/tag/b3452)
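As a minimal sketch of the llama.cpp workflow, the commands below download one quant from this repo and start an interactive chat with the recommended low temperature. The choice of Q4_K_M, the context size, and the GPU layer count are illustrative; any file from the table below works the same way.

```bash
# Download a quant from this repo (Q4_K_M chosen as an example)
huggingface-cli download starble-dev/mini-magnum-12b-v1.1-GGUF \
    mini-magnum-12b-v1.1-Q4_K_M.gguf --local-dir .

# Run an interactive chat with the recommended low temperature.
# -cnv enables conversation mode, -ngl 99 offloads all layers to the GPU
# (drop it for CPU-only builds), -c sets the context window size.
./llama-cli -m mini-magnum-12b-v1.1-Q4_K_M.gguf \
    --temp 0.3 -c 8192 -ngl 99 -cnv
```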
# Quants

PPL = perplexity, lower is better.

The PPL deltas below were measured on Llama-3-8B (each quant against FP16), so treat them as a rough guideline for relative quality, not as exact figures for this model.

| Quant Type | PPL Delta (vs. FP16 Llama-3-8B) | Size |
| ---- | ---- | ---- |
| [Q2_K](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q2_K.gguf) | +3.5199 | ? GB |
| [Q3_K_S](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q3_K_S.gguf) | +1.6321 | ? GB |
| [Q3_K_M](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q3_K_M.gguf) | +0.6569 | ? GB |
| [Q3_K_L](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q3_K_L.gguf) | +0.5562 | ? GB |
| [Q4_K_S](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q4_K_S.gguf) | +0.5562 | ? GB |
| [Q4_K_M](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q4_K_M.gguf) | +0.1754 | ? GB |
| [Q5_K_S](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q5_K_S.gguf) | +0.1049 | ? GB |
| [Q5_K_M](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q5_K_M.gguf) | +0.0569 | ? GB |
| [Q6_K](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q6_K.gguf) | +0.0217 | ? GB |
| [Q8_0](https://huggingface.co/starble-dev/mini-magnum-12b-v1.1-GGUF/blob/main/mini-magnum-12b-v1.1-Q8_0.gguf) | +0.0026 | ? GB |
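Deltas of this kind come from llama.cpp's perplexity tool: PPL is computed over a text corpus for the quantized model and compared against an FP16 run of the same model. A sketch of such a measurement, assuming a wikitext-2 style corpus file (`wiki.test.raw` is a placeholder):

```bash
# Compute perplexity of a quant over a test corpus; repeating the run
# with the FP16 model and subtracting gives the delta shown above.
./llama-perplexity -m mini-magnum-12b-v1.1-Q4_K_M.gguf -f wiki.test.raw
```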