kaitchup/Mistral-7B-awq-4bit (by The Kaitchup)

Tags: Text Generation · Transformers · Safetensors · mistral · text-generation-inference · 4-bit precision · awq
Mistral 7B quantized to 4-bit with AutoAWQ.
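A minimal loading sketch, assuming this checkpoint works with the built-in AWQ support in `transformers` (which requires the `autoawq` package and a CUDA GPU); the prompt text is purely illustrative:

```python
# Sketch: load the AWQ 4-bit checkpoint and generate text.
# Assumes `pip install transformers autoawq` and an available CUDA device.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Mistral-7B-awq-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized weights on the GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are stored in 4-bit AWQ format, the model fits in roughly a quarter of the memory of the fp16 original (about 4 GB instead of ~14 GB of weights).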