batmac/gpt2-gguf
Tags: GGUF · Inference Endpoints

Branch: main · 1 contributor · History: 3 commits
Latest commit: batmac, "Upload README.md with huggingface_hub" (c1da8c3, verified, 9 months ago)
File                       Size       LFS   Last commit message                     Age
.gitattributes             1.84 kB          Upload folder using huggingface_hub     9 months ago
README.md                  19 Bytes         Upload README.md with huggingface_hub   9 months ago
ggml-model-IQ3_M.gguf      94.2 MB    LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-IQ3_S.gguf      90.1 MB    LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-IQ3_XS.gguf     89.2 MB    LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-IQ3_XXS.gguf    83 MB      LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-IQ4_NL.gguf     107 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-IQ4_XS.gguf     103 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q2_K.gguf       81.2 MB    LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q3_K.gguf       97.7 MB    LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q3_K_L.gguf     102 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q3_K_M.gguf     97.7 MB    LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q3_K_S.gguf     90.1 MB    LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q4_0.gguf       107 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q4_1.gguf       114 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q4_K.gguf       113 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q4_K_M.gguf     113 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q4_K_S.gguf     107 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q5_0.gguf       122 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q5_1.gguf       130 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q5_K.gguf       127 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q5_K_M.gguf     127 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q5_K_S.gguf     122 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q6_K.gguf       138 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-Q8_0.gguf       178 MB     LFS   Upload folder using huggingface_hub     9 months ago
ggml-model-f16.gguf        330 MB     LFS   Upload folder using huggingface_hub     9 months ago
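Any of the files above can be fetched directly: the Hub serves raw repo files at `https://huggingface.co/<repo_id>/resolve/<revision>/<filename>`. A minimal sketch of building such a URL for one of the quantizations listed here (the helper name `gguf_url` is illustrative, not part of any library):

```python
# Sketch: construct the direct download URL for a file in this repo.
# The Hugging Face Hub exposes raw files at /<repo_id>/resolve/<revision>/<filename>.

def gguf_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Return the Hub 'resolve' URL for a file in a model repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = gguf_url("batmac/gpt2-gguf", "ggml-model-Q4_K_M.gguf")
print(url)
# → https://huggingface.co/batmac/gpt2-gguf/resolve/main/ggml-model-Q4_K_M.gguf
```

Equivalently, `huggingface_hub.hf_hub_download(repo_id="batmac/gpt2-gguf", filename="ggml-model-Q4_K_M.gguf")` downloads the file into the local Hub cache and returns its path.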