ikawrakow/mixtral-8x7b-quantized-gguf
GGUF · License: apache-2.0
1 contributor · History: 4 commits
Latest commit dfdf4f1 by ikawrakow: Adding legacy llama.cpp quants (10 months ago)
File                          Size      Commit message                    Last modified
.gitattributes                1.56 kB   Adding Mixtral quantized models   10 months ago
README.md                     1.53 kB   Update README.md                  10 months ago
mixtral-8x7b-q2k.gguf         15.4 GB   Adding Mixtral quantized models   10 months ago
mixtral-8x7b-q3k-medium.gguf  22.4 GB   Adding Mixtral quantized models   10 months ago
mixtral-8x7b-q3k-small.gguf   20.3 GB   Adding Mixtral quantized models   10 months ago
mixtral-8x7b-q40.gguf         26.4 GB   Adding legacy llama.cpp quants    10 months ago
mixtral-8x7b-q41.gguf         29.3 GB   Adding legacy llama.cpp quants    10 months ago
mixtral-8x7b-q4k-medium.gguf  28.4 GB   Adding Mixtral quantized models   10 months ago
mixtral-8x7b-q4k-small.gguf   26.7 GB   Adding Mixtral quantized models   10 months ago
mixtral-8x7b-q50.gguf         32.2 GB   Adding legacy llama.cpp quants    10 months ago
mixtral-8x7b-q5k-small.gguf   32.2 GB   Adding Mixtral quantized models   10 months ago

All .gguf files are stored via Git LFS.
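The filename suffixes appear to follow llama.cpp's quantization naming (for example, q4k-small corresponds to Q4_K_S, while q40, q41, and q50 are the legacy Q4_0, Q4_1, and Q5_0 formats referenced in the commit messages). Below is a minimal sketch of how one of these files could be fetched and loaded. It assumes the huggingface_hub and llama-cpp-python packages are installed; the chosen file, context size, and generation parameters are illustrative assumptions, not recommendations from this repository.

```python
# Minimal sketch: download one quantized GGUF file from this repo and load it
# with llama-cpp-python. File choice and parameters are example assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # provided by the separate llama-cpp-python package

# Fetch the ~26.7 GB Q4_K_S file (cached under ~/.cache/huggingface by default).
model_path = hf_hub_download(
    repo_id="ikawrakow/mixtral-8x7b-quantized-gguf",
    filename="mixtral-8x7b-q4k-small.gguf",
)

# Load the model; n_ctx and n_gpu_layers are assumed example values.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=0)

output = llm("Q: What is Mixtral 8x7B? A:", max_tokens=128)
print(output["choices"][0]["text"])
```

As a rough guide, loading a file requires about its on-disk size in RAM or VRAM plus some overhead, so the smaller quants (e.g. mixtral-8x7b-q2k.gguf at 15.4 GB) trade output quality for a lower memory footprint than the ~32 GB q5 variants.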