---
base_model: mistralai/Mistral-7B-v0.1
library_name: transformers
tags:
- mergekit
- merge
- Mistral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
- function calling
- json mode
model-index:
- name: Hermes-2-Pro-Mistral-10.7B
results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
widget:
- example_title: Hermes 2 Pro
messages:
- role: system
content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.
- role: user
    content: Write a short story about Goku discovering Kirby has teamed up with Majin Buu to destroy the world.
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of Hermes-2-Pro-Mistral-10.7B

Quantized using llama.cpp release b2536.

Original model: https://huggingface.co/Joseph717171/Hermes-2-Pro-Mistral-10.7B

Download a single file (not the whole branch) from the table below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Hermes-2-Pro-Mistral-10.7B-Q8_0.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q8_0.gguf) | Q8_0 | 11.40GB | Extremely high quality, generally unneeded but max available quant. |
| [Hermes-2-Pro-Mistral-10.7B-Q6_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q6_K.gguf) | Q6_K | 8.80GB | Very high quality, near perfect, *recommended*. |
| [Hermes-2-Pro-Mistral-10.7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q5_K_M.gguf) | Q5_K_M | 7.59GB | High quality, very usable. |
| [Hermes-2-Pro-Mistral-10.7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q5_K_S.gguf) | Q5_K_S | 7.39GB | High quality, very usable. |
| [Hermes-2-Pro-Mistral-10.7B-Q5_0.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q5_0.gguf) | Q5_0 | 7.39GB | High quality, older format, generally not recommended. |
| [Hermes-2-Pro-Mistral-10.7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q4_K_M.gguf) | Q4_K_M | 6.46GB | Good quality, uses about 4.83 bits per weight. |
| [Hermes-2-Pro-Mistral-10.7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q4_K_S.gguf) | Q4_K_S | 6.11GB | Slightly lower quality with small space savings. |
| [Hermes-2-Pro-Mistral-10.7B-IQ4_NL.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-IQ4_NL.gguf) | IQ4_NL | 6.14GB | Decent quality, similar to Q4_K_S, uses a newer quantization method. |
| [Hermes-2-Pro-Mistral-10.7B-IQ4_XS.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-IQ4_XS.gguf) | IQ4_XS | 5.82GB | Decent quality, new method with similar performance to Q4. |
| [Hermes-2-Pro-Mistral-10.7B-Q4_0.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q4_0.gguf) | Q4_0 | 6.07GB | Decent quality, older format, generally not recommended. |
| [Hermes-2-Pro-Mistral-10.7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q3_K_L.gguf) | Q3_K_L | 5.65GB | Lower quality but usable, good for low RAM availability. |
| [Hermes-2-Pro-Mistral-10.7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q3_K_M.gguf) | Q3_K_M | 5.19GB | Even lower quality. |
| [Hermes-2-Pro-Mistral-10.7B-IQ3_M.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-IQ3_M.gguf) | IQ3_M | 4.84GB | Medium-low quality, new method with decent performance. |
| [Hermes-2-Pro-Mistral-10.7B-IQ3_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-IQ3_S.gguf) | IQ3_S | 4.69GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
| [Hermes-2-Pro-Mistral-10.7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q3_K_S.gguf) | Q3_K_S | 4.66GB | Low quality, not recommended. |
| [Hermes-2-Pro-Mistral-10.7B-Q2_K.gguf](https://huggingface.co/bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF/blob/main/Hermes-2-Pro-Mistral-10.7B-Q2_K.gguf) | Q2_K | 4.00GB | Extremely low quality, *not* recommended. |
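
If you want to script the download of a single quant instead of grabbing it through the browser, a minimal sketch using the `huggingface_hub` Python library is shown below; the library and the specific quant chosen (Q4_K_M) are assumptions for illustration, not part of this card.

```python
from huggingface_hub import hf_hub_download

# Download one quant file (Q4_K_M chosen as an example) into the current directory.
model_path = hf_hub_download(
    repo_id="bartowski/Hermes-2-Pro-Mistral-10.7B-GGUF",
    filename="Hermes-2-Pro-Mistral-10.7B-Q4_K_M.gguf",
    local_dir=".",
)
print(f"Saved to: {model_path}")
```

To run a downloaded GGUF from Python, one option is the llama-cpp-python bindings around llama.cpp (an assumption; this card does not prescribe a runtime). A rough sketch, assuming the Q4_K_M file from above and ChatML-style prompting as used by Hermes 2 Pro:

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx and n_gpu_layers are illustrative values,
# tune them for your hardware.
llm = Llama(
    model_path="Hermes-2-Pro-Mistral-10.7B-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,       # offload all layers to GPU if available
    chat_format="chatml",  # Hermes 2 Pro models use ChatML-style prompting
)

# Mirror the system + user chat from this card's widget example.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
        {"role": "user", "content": "Write a short story about Goku discovering Kirby has teamed up with Majin Buu to destroy the world."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

The llama.cpp binaries from the same release also accept these GGUF files directly if you prefer not to go through Python.
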
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski