NikolayKozloff/Meta-Llama-3-8B-Instruct-bf16-correct-pre-tokenizer-and-EOS-token-Q8_0-Q6_k-Q4_K_M-GGUF
Tags: GGUF, Inference Endpoints, conversational
No model card has been provided for this repository.
Downloads last month: 299
Format: GGUF
Model size: 8.03B params
Architecture: llama
Quantizations: Q4_K_M (4-bit), Q6_K (6-bit), Q8_0 (8-bit)
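As a minimal sketch of how one of these quantized GGUF files could be run locally, the snippet below uses huggingface_hub to fetch a file from this repository and llama-cpp-python to load it. The .gguf filename is an assumption, not the actual file name; check the repository's file listing and substitute the real name for the quantization you want.

```python
# Sketch: download one quantized GGUF file from this repo and run it locally
# with llama-cpp-python (pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = (
    "NikolayKozloff/Meta-Llama-3-8B-Instruct-bf16-correct-pre-tokenizer-"
    "and-EOS-token-Q8_0-Q6_k-Q4_K_M-GGUF"
)

# Hypothetical filename: pick the quantization that fits your hardware,
# Q4_K_M (smallest), Q6_K, or Q8_0 (largest, closest to the bf16 source).
gguf_file = "meta-llama-3-8b-instruct-q4_k_m.gguf"

model_path = hf_hub_download(repo_id=repo_id, filename=gguf_file)

# n_ctx sets the context window; n_gpu_layers=-1 offloads all layers to the
# GPU if llama-cpp-python was built with GPU support, otherwise runs on CPU.
llm = Llama(model_path=model_path, n_ctx=8192, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```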