larenspear/mamba-130m-hf-GGUF

Tags: GGUF

1 contributor · History: 2 commits · latest commit b8587c6 ("GGUF Quantization" by larenspear, 12 months ago)
File                        Size      Last commit         Age
.gitattributes              1.52 kB   initial commit      12 months ago
mamba-130m-hf-f16.gguf      271 MB    GGUF Quantization   12 months ago
mamba-130m-hf-q2_k.gguf     83.7 MB   GGUF Quantization   12 months ago
mamba-130m-hf-q3_k_l.gguf   92.3 MB   GGUF Quantization   12 months ago
mamba-130m-hf-q3_k_m.gguf   92.3 MB   GGUF Quantization   12 months ago
mamba-130m-hf-q3_k_s.gguf   92.3 MB   GGUF Quantization   12 months ago
mamba-130m-hf-q4_k_m.gguf   104 MB    GGUF Quantization   12 months ago
mamba-130m-hf-q4_k_s.gguf   104 MB    GGUF Quantization   12 months ago
mamba-130m-hf-q5_k_m.gguf   114 MB    GGUF Quantization   12 months ago
mamba-130m-hf-q5_k_s.gguf   114 MB    GGUF Quantization   12 months ago
mamba-130m-hf-q6_k.gguf     125 MB    GGUF Quantization   12 months ago
mamba-130m-hf-q8_0.gguf     155 MB    GGUF Quantization   12 months ago

All .gguf files are stored via Git LFS.
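A minimal sketch of fetching one of the quantized files listed above with the `huggingface_hub` client. The repo id and filenames are taken from this listing; the `hf_hub_download` call itself is the standard Hub download API, but it is left commented out here because it performs a real network download (roughly 104 MB for the q4_k_m file):

```python
# Sketch: pick a quantization level from this repo and resolve its filename.
# Repo id and filenames come from the file listing above.
REPO_ID = "larenspear/mamba-130m-hf-GGUF"

# Quantization suffix -> (repo filename, approximate size in MB), per the listing.
QUANT_FILES = {
    "f16":    ("mamba-130m-hf-f16.gguf", 271.0),
    "q2_k":   ("mamba-130m-hf-q2_k.gguf", 83.7),
    "q4_k_m": ("mamba-130m-hf-q4_k_m.gguf", 104.0),
    "q8_0":   ("mamba-130m-hf-q8_0.gguf", 155.0),
}

def gguf_filename(quant: str) -> str:
    """Return the repo filename for a given quantization suffix."""
    return QUANT_FILES[quant][0]

if __name__ == "__main__":
    filename = gguf_filename("q4_k_m")
    print(filename)
    # Actual download (requires `pip install huggingface_hub` and network access):
    # from huggingface_hub import hf_hub_download
    # local_path = hf_hub_download(repo_id=REPO_ID, filename=filename)
```

Lower-bit variants (q2_k, q3_k_*) trade model quality for smaller downloads and memory footprint; q8_0 and f16 stay closest to the original weights.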