This is a GGUF conversion of meta-llama/Llama-3.1-8B-Instruct, quantized to Q2_K using the llama.cpp library.
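The conversion follows llama.cpp's standard two-step flow: export the Hugging Face checkpoint to a full-precision GGUF file, then quantize it. As a rough sketch of that pipeline, assuming a local llama.cpp checkout with its tools built; the directory names and output filenames below are illustrative placeholders, not the exact commands used for this upload:

```python
# Sketch of the llama.cpp conversion + quantization pipeline described above.
# Assumes a local llama.cpp checkout with the tools built; all paths and
# filenames are assumptions for illustration.
import subprocess

# 1) Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "Llama-3.1-8B-Instruct",             # local HF model directory (assumed)
        "--outfile", "llama-3.1-8b-f16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)

# 2) Quantize the f16 GGUF down to Q2_K with the llama-quantize tool.
subprocess.run(
    [
        "llama.cpp/build/bin/llama-quantize",  # build path assumed (CMake default)
        "llama-3.1-8b-f16.gguf",
        "llama-3.1-8b-Q2_K.gguf",
        "Q2_K",
    ],
    check=True,
)
```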

- Format: GGUF
- Model size: 8.03B parameters
- Architecture: llama
- Quantization: Q2_K (2-bit)

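For local inference, a minimal sketch using the llama-cpp-python bindings; the GGUF filename is an assumption, so substitute the actual file shipped in this repository:

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). The filename below is an assumption;
# replace it with the .gguf file from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-8b-Q2_K.gguf",  # assumed filename
    n_ctx=4096,  # context window; larger values cost more memory
)

# Llama-3.1-8B-Instruct is a chat model, so use the chat-completion API,
# which applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

Note that Q2_K is an aggressive 2-bit quantization: it keeps the file small enough for modest hardware, but expect a noticeable quality drop compared with higher-bit quantizations of the same model.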