ChristianAzinn/mixtral-8x22b-v0.1-imatrix
Tags: Text Generation, Transformers, GGUF, English, quantized, 2-bit, 3-bit, 4-bit precision, 5-bit, 6-bit, 8-bit precision, 16-bit, mixtral, Mixture of Experts
License: apache-2.0
1 contributor, History: 2 commits
Latest commit: ChristianAzinn, "upload q4_k" (ce07e86, verified), about 1 year ago
File                                                     Size      LFS    Last commit    Updated
.gitattributes                                           2.6 kB           upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_m-00001-of-00006.gguf    954 MB    LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_m-00002-of-00006.gguf    803 MB    LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_m-00003-of-00006.gguf    843 MB    LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_m-00004-of-00006.gguf    850 MB    LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_m-00005-of-00006.gguf    41.6 GB   LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_m-00006-of-00006.gguf    40.5 GB   LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_s-00001-of-00006.gguf    954 MB    LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_s-00002-of-00006.gguf    803 MB    LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_s-00003-of-00006.gguf    843 MB    LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_s-00004-of-00006.gguf    850 MB    LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_s-00005-of-00006.gguf    39.4 GB   LFS    upload q4_k    about 1 year ago
mixtral-8x22b-v0.1-imatrix.q4_k_s-00006-of-00006.gguf    37.6 GB   LFS    upload q4_k    about 1 year ago

(All .gguf files are marked "Safe" by the Hub's file scan.)
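The shard filenames in this listing follow the split-GGUF naming convention used by llama.cpp's gguf-split tool (`<base>-00001-of-00006.gguf`). As a minimal illustrative sketch (the `group_shards` helper below is hypothetical, not part of llama.cpp or huggingface_hub), shards can be grouped per quantization variant by parsing that pattern:

```python
import re
from collections import defaultdict

# Split-GGUF shard name pattern: <base>-<5-digit index>-of-<5-digit total>.gguf
SPLIT_RE = re.compile(r"^(?P<base>.+)-(?P<idx>\d{5})-of-(?P<total>\d{5})\.gguf$")

def group_shards(filenames):
    """Group split-GGUF shard filenames by base name, ordered by shard index."""
    groups = defaultdict(list)
    for name in filenames:
        m = SPLIT_RE.match(name)
        if m:
            groups[m.group("base")].append((int(m.group("idx")), name))
    # Sort each group's shards by their declared index.
    return {base: [n for _, n in sorted(pairs)] for base, pairs in groups.items()}

files = [
    "mixtral-8x22b-v0.1-imatrix.q4_k_m-00002-of-00006.gguf",
    "mixtral-8x22b-v0.1-imatrix.q4_k_m-00001-of-00006.gguf",
    "mixtral-8x22b-v0.1-imatrix.q4_k_s-00001-of-00006.gguf",
]
print(group_shards(files))
```

Loading only the first shard of a group in llama.cpp makes it pick up the remaining shards automatically from the same directory, which is why keeping each variant's shards together matters.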