MaziyarPanahi/Mistral-Large-Instruct-2407-GGUF
Tags: Text Generation · GGUF · quantized · imatrix · 2-bit · 3-bit · 4-bit · 5-bit · 6-bit · 8-bit precision
Mistral-Large-Instruct-2407-GGUF — 1 contributor · History: 3 commits

Latest commit: ed4c6e5 (verified, 7 months ago) by MaziyarPanahi — "Upload folder using huggingface_hub (#2)"
File                                       Size      LFS   Last commit                                 Updated
.gitattributes                             1.82 kB         Upload folder using huggingface_hub (#2)   7 months ago
Mistral-Large-Instruct-2407.IQ1_M.gguf     28.4 GB   LFS   Upload folder using huggingface_hub (#2)   7 months ago
Mistral-Large-Instruct-2407.IQ1_S.gguf     26 GB     LFS   Upload folder using huggingface_hub (#2)   7 months ago
Mistral-Large-Instruct-2407.IQ2_XS.gguf    36.1 GB   LFS   Upload folder using huggingface_hub (#2)   7 months ago
Mistral-Large-Instruct-2407.Q2_K.gguf      45.2 GB   LFS   Upload folder using huggingface_hub (#2)   7 months ago
README.md                                  3.06 kB         Create README.md (#3)                      7 months ago
main.log                                   22.7 kB         Upload folder using huggingface_hub (#2)   7 months ago
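The GGUF files above are served as LFS objects; a minimal sketch of building a direct-download URL for one of them, assuming Hugging Face's standard `resolve` URL layout (`https://huggingface.co/{repo_id}/resolve/{revision}/{filename}` — an assumption about the hosting scheme, not this repo's documented instructions):

```python
# Build direct-download URLs for files in this repo using the
# standard Hugging Face "resolve" URL pattern (assumed layout).
REPO_ID = "MaziyarPanahi/Mistral-Large-Instruct-2407-GGUF"

def gguf_url(filename: str, revision: str = "main") -> str:
    """Return the direct download URL for `filename` at `revision`."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

# Example: the smallest quant listed above (IQ1_S, ~26 GB).
url = gguf_url("Mistral-Large-Instruct-2407.IQ1_S.gguf")
```

In practice, `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` handles the same resolution plus caching and resumable transfers, which matters for files in the 26–45 GB range listed here.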