This is a GGUF conversion of meta-llama/Llama-3.1-8B-Instruct, quantized to Q2_K (2-bit) using the llama.cpp library.
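A minimal way to run the quantized model is llama.cpp's `llama-cli` binary. This sketch assumes llama.cpp is already built and that the GGUF file is named `Llama-3.1-8B-Instruct-Q2_K.gguf` (adjust the path to match the actual downloaded file):

```shell
# Run the Q2_K-quantized model with llama.cpp's CLI.
# -m: path to the GGUF model file (filename here is an assumption)
# -p: prompt text
# -n: maximum number of tokens to generate
./llama-cli -m Llama-3.1-8B-Instruct-Q2_K.gguf \
    -p "Explain GGUF quantization in one sentence." \
    -n 128
```

At Q2_K, expect a noticeable quality drop compared to higher-bit quants in exchange for a much smaller memory footprint.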

Format: GGUF
Model size: 8.03B params
Architecture: llama
Quantization: 2-bit (Q2_K)