This is a GGUF conversion of meta-llama/Llama-2-7b-chat-hf, quantized to Q4_0 using the llama.cpp library.
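A minimal sketch of loading and querying the quantized file with the llama-cpp-python bindings (one common way to run GGUF models; the exact .gguf file name below is an assumption, substitute the file shipped in this repository):

```python
# Minimal usage sketch with llama-cpp-python (pip install llama-cpp-python).
# The model_path value is a hypothetical file name; replace it with the
# actual Q4_0 .gguf file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_0.gguf",  # assumed file name
    n_ctx=2048,  # context window size
)

# Llama-2-chat models expect a chat-style prompt; the chat completion
# helper applies the appropriate template for us.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the GGUF format?"}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```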

Model details
Format: GGUF
Model size: 6.74B params
Architecture: llama
Quantization: Q4_0 (4-bit)