---
language:
- fr
- it
- de
- es
- en
license: apache-2.0
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
inference:
  parameters:
    temperature: 0.5
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- llama-cpp
- gguf
---

# Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf

This model was converted to GGUF format from [`mistralai/Mixtral-8x7B-Instruct-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) using llama.cpp.
Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for more details on the model.

## Available Versions

- `Mixtral-8x7B-Instruct-v0.1.q4_0.gguf` (q4_0)
- `Mixtral-8x7B-Instruct-v0.1.q4_1.gguf` (q4_1)
- `Mixtral-8x7B-Instruct-v0.1.q5_0.gguf` (q5_0)
- `Mixtral-8x7B-Instruct-v0.1.q5_1.gguf` (q5_1)
- `Mixtral-8x7B-Instruct-v0.1.q8_0.gguf` (q8_0)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_s.gguf` (q3_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_m.gguf` (q3_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_l.gguf` (q3_K_L)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_s.gguf` (q4_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf` (q4_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_s.gguf` (q5_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_m.gguf` (q5_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q6_k.gguf` (q6_K)

## Use with llama.cpp

Replace `FILENAME` with one of the filenames above.

### CLI:

```bash
llama-cli --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -p "Your prompt here"
```

### Server:

```bash
llama-server --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -c 2048
```
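
Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal Python sketch for querying it, assuming the server's default address `127.0.0.1:8080` (the host, port, and temperature here are illustrative, not part of this repo):

```python
import json
import urllib.request

# Default llama-server address; adjust if you started the server with --host/--port.
SERVER_URL = "http://127.0.0.1:8080/v1/chat/completions"


def build_chat_payload(prompt: str, temperature: float = 0.5) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def ask(prompt: str) -> str:
    """POST the prompt to the running llama-server and return the reply text."""
    body = json.dumps(build_chat_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


# Example (requires a running server):
#   reply = ask("What is your favorite condiment?")
```

The standard library is enough here; the `openai` client package can also be pointed at the same endpoint via its `base_url` option if you prefer.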

## Model Details

- **Original Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Format:** GGUF