PrunaAI/codellama-CodeLlama-13b-Python-hf-AWQ-4bit-smashed
Tags: Text Generation · Transformers · Safetensors · llama · pruna-ai · text-generation-inference · Inference Endpoints · 4-bit precision · awq
Files and versions (branch: main)
1 contributor · History: 5 commits
Latest commit: Update README.md (65ba33c, verified) by sharpenb, 3 months ago
File | Size | Last commit | Updated
.gitattributes | 1.52 kB | initial commit | 5 months ago
README.md | 5.39 kB | Update README.md | 4 months ago
config.json | 895 Bytes | Upload folder using huggingface_hub (#1) | 5 months ago
generation_config.json | 132 Bytes | Upload folder using huggingface_hub (#1) | 5 months ago
model-00001-of-00002.safetensors | 5 GB (LFS) | Upload folder using huggingface_hub (#1) | 5 months ago
model-00002-of-00002.safetensors | 2.25 GB (LFS) | Upload folder using huggingface_hub (#1) | 5 months ago
model.safetensors.index.json | 79.3 kB | Upload folder using huggingface_hub (#1) | 5 months ago
smash_config.json | 1.03 kB | Upload folder using huggingface_hub (#1) | 5 months ago
special_tokens_map.json | 515 Bytes | Upload folder using huggingface_hub (#1) | 5 months ago
tokenizer.json | 1.84 MB | Upload folder using huggingface_hub (#1) | 5 months ago
tokenizer_config.json | 1.84 kB | Upload folder using huggingface_hub (#1) | 5 months ago
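
The tags identify this repository as an AWQ 4-bit quantized CodeLlama checkpoint that loads through the Transformers library. Below is a minimal loading sketch: the repository ID comes from this page, while the prompt, generation settings, and `device_map="auto"` (which needs the accelerate package) are assumptions, and loading an AWQ checkpoint usually also requires the autoawq package.

```python
# Minimal sketch for loading and sampling from the quantized checkpoint.
# Assumes: transformers with AWQ support, autoawq, accelerate, and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PrunaAI/codellama-CodeLlama-13b-Python-hf-AWQ-4bit-smashed"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; CodeLlama-Python is tuned for code completion.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```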