4darsh-Dev/Meta-Llama-3-8B-quantized-GPTQ
Tags: Text Generation · PEFT · English · llama · llama-3-8b · llama-3-8b-quantized · llama-3-8b-autogptq · meta · quantized · 4-bit precision · gptq
License: other
Branch: main · 1 contributor · History: 4 commits
Latest commit: 4darsh-Dev, "updated readme" (b613dae, verified, 4 months ago)
Files:
.gitattributes (1.52 kB) · initial commit · 4 months ago
README.md (557 Bytes) · updated readme · 4 months ago
config.json (1.03 kB) · Upload of AutoGPTQ quantized model · 4 months ago
gptq_model-4bit-128g.bin (5.74 GB, LFS, pickle) · Upload of AutoGPTQ quantized model · 4 months ago
    Detected pickle imports (4): collections.OrderedDict, torch.HalfStorage, torch._utils._rebuild_tensor_v2, torch.IntStorage
quantize_config.json (265 Bytes) · Upload of AutoGPTQ quantized model · 4 months ago
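The repository ships the 4-bit, group-size-128 GPTQ weights as a single pickle checkpoint (gptq_model-4bit-128g.bin) alongside the config.json and quantize_config.json that AutoGPTQ reads at load time. A minimal loading sketch under those assumptions is below; it assumes the auto-gptq and transformers packages are installed, a CUDA GPU is available, and the repo id and weight basename match the files listed above.

```python
# Minimal sketch for loading this AutoGPTQ checkpoint (assumptions noted above).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "4darsh-Dev/Meta-Llama-3-8B-quantized-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id)

# The weights are stored as gptq_model-4bit-128g.bin (a pickle .bin, not safetensors),
# so the basename is passed explicitly and safetensors loading is disabled.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename="gptq_model-4bit-128g",
    use_safetensors=False,
    device="cuda:0",
)

# Quick generation check with the quantized model.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the checkpoint is a pickle file rather than safetensors, it is executed-on-load; only load it from a source you trust, or convert it to safetensors first.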