---
language:
- fr
- it
- de
- es
- en
license: apache-2.0
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
inference:
  parameters:
    temperature: 0.5
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- llama-cpp
- gguf
---
# Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf
This model was converted to GGUF format from [`mistralai/Mixtral-8x7B-Instruct-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) using [llama.cpp](https://github.com/ggerganov/llama.cpp).
Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for more details on the model.
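The exact commands used for this conversion are not published here; a typical llama.cpp workflow for producing files like these is sketched below. The local paths, the intermediate f16 file, and the `q4_K_M` example are illustrative assumptions, not the author's recorded steps.
```bash
# Hypothetical sketch of a standard llama.cpp conversion workflow,
# not the exact commands used for this repo.
# 1. Convert the original Hugging Face checkpoint to an f16 GGUF file:
python convert_hf_to_gguf.py ./Mixtral-8x7B-Instruct-v0.1 \
  --outtype f16 --outfile Mixtral-8x7B-Instruct-v0.1.f16.gguf
# 2. Quantize the f16 file to one of the listed formats, e.g. q4_K_M:
llama-quantize Mixtral-8x7B-Instruct-v0.1.f16.gguf \
  Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf q4_K_M
```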
## Available Versions
- `Mixtral-8x7B-Instruct-v0.1.q4_0.gguf` (q4_0)
- `Mixtral-8x7B-Instruct-v0.1.q4_1.gguf` (q4_1)
- `Mixtral-8x7B-Instruct-v0.1.q5_0.gguf` (q5_0)
- `Mixtral-8x7B-Instruct-v0.1.q5_1.gguf` (q5_1)
- `Mixtral-8x7B-Instruct-v0.1.q8_0.gguf` (q8_0)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_s.gguf` (q3_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_m.gguf` (q3_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_l.gguf` (q3_K_L)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_s.gguf` (q4_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf` (q4_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_s.gguf` (q5_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_m.gguf` (q5_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q6_k.gguf` (q6_K)
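To fetch a single quantization without cloning the whole repository, you can use `huggingface-cli`; the filename below is just one of the variants listed above, so substitute whichever fits your hardware:
```bash
# Download a single quantized file from the Hub (example: q4_K_M)
huggingface-cli download Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf \
  Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf --local-dir .
```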
## Use with llama.cpp
Replace `FILENAME` in the commands below with one of the filenames listed above.
### CLI:
```bash
llama-cli --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -p "Your prompt here"
```
### Server:
```bash
llama-server --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -c 2048
```
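Once running, `llama-server` exposes an OpenAI-compatible HTTP API, by default on `http://localhost:8080`. A minimal request, reusing the widget prompt and temperature from this card, might look like:
```bash
# Query the OpenAI-compatible chat endpoint of a running llama-server
# (assumes the default host/port; adjust if you passed --host/--port)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is your favorite condiment?"}],
    "temperature": 0.5
  }'
```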
## Model Details
- **Original Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Format:** GGUF