Hugging Face
FallenMerick / Space-Whale-Lite-13B-GGUF
Tags: Text Generation · GGUF · quantized · 4-bit precision · 5-bit · 6-bit · 8-bit precision · Merge · frankenmerge
Branch: main · 1 contributor · History: 6 commits
Latest commit: Create README.md by FallenMerick (0f588b8, verified, about 1 year ago)
File                              Size       LFS  Last commit                              Updated
.gitattributes                    1.79 kB    -    Upload Space-Whale-Lite-13B-Q8_0.gguf    about 1 year ago
README.md                         348 Bytes  -    Create README.md                         about 1 year ago
Space-Whale-Lite-13B-Q4_K_M.gguf  7.87 GB    LFS  Upload Space-Whale-Lite-13B-Q4_K_M.gguf  about 1 year ago
Space-Whale-Lite-13B-Q5_K_M.gguf  9.23 GB    LFS  Upload Space-Whale-Lite-13B-Q5_K_M.gguf  about 1 year ago
Space-Whale-Lite-13B-Q6_K.gguf    10.7 GB    LFS  Upload Space-Whale-Lite-13B-Q6_K.gguf    about 1 year ago
Space-Whale-Lite-13B-Q8_0.gguf    13.8 GB    LFS  Upload Space-Whale-Lite-13B-Q8_0.gguf    about 1 year ago

All files are marked Safe by the Hub's file scan.