kaitchup/Mistral-NeMo-Minitron-8B-Base-GGUF
The Kaitchup
GGUF
License: cc-by-4.0
Files and versions (main branch)
1 contributor
History: 4 commits
Latest commit: bnjmnmarie, "Delete Q4_K_L_perplexity.txt" (f4d12ae, verified), 10 months ago
File                      Size      LFS  Commit message                       Last updated
.gitattributes            1.66 kB        Upload folder using huggingface_hub  10 months ago
FP16.gguf                 16.8 GB   LFS  Upload folder using huggingface_hub  10 months ago
Q4_K_M.gguf               5.15 GB   LFS  Upload folder using huggingface_hub  10 months ago
Q4_K_M_I_perplexity.txt   0 Bytes        Upload folder using huggingface_hub  10 months ago
Q4_K_M_perplexity.txt     7.06 kB        Upload folder using huggingface_hub  10 months ago
Q4_K_S.gguf               4.91 GB   LFS  Upload folder using huggingface_hub  10 months ago
Q4_K_S_I_perplexity.txt   0 Bytes        Upload folder using huggingface_hub  10 months ago
Q4_K_S_perplexity.txt     7.06 kB        Upload folder using huggingface_hub  10 months ago
README.md                 30 Bytes       initial commit                       10 months ago
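
The repository ships an FP16 GGUF file plus two 4-bit quantizations (Q4_K_M and Q4_K_S) together with their perplexity logs. The following is a minimal sketch, assuming llama-cpp-python as the runner (GGUF is the llama.cpp format, but the repository itself does not prescribe a runner), of how one of the listed files could be downloaded and loaded. The repo id and filename come from the listing above; the context size, GPU offload setting, prompt, and sampling parameters are illustrative assumptions, not part of the repository.

```python
# Sketch only: fetch one of the listed quantizations and run a text completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the 4-bit medium quantization (5.15 GB) from this repository.
model_path = hf_hub_download(
    repo_id="kaitchup/Mistral-NeMo-Minitron-8B-Base-GGUF",
    filename="Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx and n_gpu_layers are assumed values, adjust to your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# This is a base (non-instruct) model, so plain text completion is the natural usage.
out = llm("The capital of France is", max_tokens=32, temperature=0.0)
print(out["choices"][0]["text"])
```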