ikawrakow/mixtral-8x7b-quantized-gguf
GGUF
  • 1 contributor
  • History: 6 commits
  • Latest commit by ikawrakow: Adding IQ3_XXS and fixed _M models (c60d8b0, over 1 year ago)
  • .gitattributes (1.56 kB): Adding Mixtral quantized models, over 1 year ago
  • README.md (1.53 kB): Update README.md, over 1 year ago
  • mixtral-8x7b-iq3-xxs.gguf (18.3 GB, LFS): Adding IQ3_XXS and fixed _M models, over 1 year ago
  • mixtral-8x7b-q2k.gguf (15.4 GB, LFS): Adding Mixtral quantized models, over 1 year ago
  • mixtral-8x7b-q3k-medium.gguf (22.5 GB, LFS): Adding IQ3_XXS and fixed _M models, over 1 year ago
  • mixtral-8x7b-q3k-small.gguf (20.3 GB, LFS): Adding Mixtral quantized models, over 1 year ago
  • mixtral-8x7b-q40.gguf (26.4 GB, LFS): Adding legacy llama.cpp quants, over 1 year ago
  • mixtral-8x7b-q41.gguf (29.3 GB, LFS): Adding legacy llama.cpp quants, over 1 year ago
  • mixtral-8x7b-q4k-medium.gguf (28.4 GB, LFS): Adding IQ3_XXS and fixed _M models, over 1 year ago
  • mixtral-8x7b-q4k-small.gguf (26.7 GB, LFS): Adding Mixtral quantized models, over 1 year ago
  • mixtral-8x7b-q50.gguf (32.2 GB, LFS): Adding legacy llama.cpp quants, over 1 year ago
  • mixtral-8x7b-q5k-small.gguf (32.2 GB, LFS): Adding Mixtral quantized models, over 1 year ago
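
The files above are plain GGUF blobs stored via Git LFS, so any of them can be fetched individually rather than cloning the whole repository. A minimal sketch, assuming the `huggingface_hub` Python package and its default local cache, with the q4k-medium file picked purely as an example:

```python
# Sketch: download one quantized GGUF file from this repository.
# repo_id and filename are taken from the listing above; the cache
# location is whatever hf_hub_download uses by default.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="ikawrakow/mixtral-8x7b-quantized-gguf",
    filename="mixtral-8x7b-q4k-medium.gguf",
)
print(model_path)  # local path to the downloaded .gguf file
```

The resulting .gguf path can then be handed to a llama.cpp build (for example via its -m model flag); the exact binary name and flags depend on the llama.cpp version you have installed.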