---
license: apache-2.0
tags:
  - mistral
  - conversational
  - text-generation-inference
base_model: intervitens/mini-magnum-12b-v1.1
library_name: transformers
---

**Sampling:**
Mistral-Nemo-12B is very sensitive to the temperature sampler; start with values around 0.3, otherwise you may get strange output. MistralAI mentions this in the Transformers section of their model card.

**Original Model:** intervitens/mini-magnum-12b-v1.1

**How to Use:** llama.cpp

**Original Model License:** Apache 2.0

**Release Used:** b3452
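
A minimal sketch of running one of these quants with llama.cpp's CLI, using the low temperature recommended above. The GGUF filename below is a placeholder, not an actual file name from this repo; substitute whichever quant you download.

```bash
# Minimal llama.cpp usage sketch (binary name as in recent releases such as b3452).
# The model filename is a placeholder; point -m at the quant file you downloaded.
./llama-cli \
  -m mini-magnum-12b-v1.1-Q4_K_M.gguf \
  -p "Write a short story about a lighthouse keeper." \
  --temp 0.3 \
  -c 4096 \
  -n 256
```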

## Quants

PPL = Perplexity; lower is better.
The comparisons are made against FP16 Llama-3-8B and are intended as a rough guideline, not as exact figures for this model.

| Quant Type | Note | Size |
| ---------- | ---- | ---- |
| Q2_K | +3.5199 ppl @ Llama-3-8B | ? GB |
| Q3_K_S | +1.6321 ppl @ Llama-3-8B | ? GB |
| Q3_K_M | +0.6569 ppl @ Llama-3-8B | ? GB |
| Q3_K_L | +0.5562 ppl @ Llama-3-8B | ? GB |
| Q4_K_S | +0.5562 ppl @ Llama-3-8B | ? GB |
| Q4_K_M | +0.1754 ppl @ Llama-3-8B | ? GB |
| Q5_K_S | +0.1049 ppl @ Llama-3-8B | ? GB |
| Q5_K_M | +0.0569 ppl @ Llama-3-8B | ? GB |
| Q6_K | +0.0217 ppl @ Llama-3-8B | ? GB |
| Q8_0 | +0.0026 ppl @ Llama-3-8B | ? GB |
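
If you want to check perplexity for a specific quant on this model rather than relying on the Llama-3-8B guideline above, llama.cpp ships a perplexity tool. This is only a sketch: the model filename and evaluation text file are placeholders, and the absolute numbers depend on the dataset and context size you use.

```bash
# Sketch: measuring perplexity of a downloaded quant with llama.cpp's perplexity tool.
# Both the GGUF filename and the evaluation text file are placeholders.
./llama-perplexity \
  -m mini-magnum-12b-v1.1-Q4_K_M.gguf \
  -f wiki.test.raw \
  -c 4096
```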