steampunque/Mistral-Small-3.2-24B-Instruct-2506-Hybrid-GGUF
Tags: GGUF, Mistral, Mistral-Small, quantized, 4-bit precision
License: apache-2.0
Branch: main · 1 contributor · History: 4 commits
Latest commit: "Create README.md" (33d2e39, verified) by steampunque, 13 days ago
Files:
.gitattributes (1.69 kB) · Upload Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf with huggingface_hub · 13 days ago
Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf (12.7 GB, LFS) · Upload Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf with huggingface_hub · 13 days ago
Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf (878 MB, LFS) · Upload Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf with huggingface_hub · 13 days ago
README.md (3.23 kB) · Create README.md · 13 days ago