This model was converted to GGUF format from nvidia/Mistral-NeMo-Minitron-8B-Instruct and quantized to Q2_K using the llama.cpp library.
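
For quick local inference, a minimal sketch using llama-cpp-python is shown below. The exact .gguf filename inside the repository is an assumption (a `*Q2_K.gguf` glob is used), so adjust it to the actual file name; settings such as `n_ctx` and `max_tokens` are illustrative, not required values.

```python
# Minimal usage sketch (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

# NOTE: the .gguf filename is assumed; the glob matches any Q2_K file in the repo
# and should be adjusted if it does not resolve to the actual file.
llm = Llama.from_pretrained(
    repo_id="Manel/Mistral-NeMo-Minitron-8B-Instruct-Q2_K-GGUF",
    filename="*Q2_K.gguf",
    n_ctx=4096,  # context window; raise if you need longer prompts
)

# Chat-style inference; the base checkpoint is instruction-tuned.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF Q2_K quantization does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```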

GGUF details
- Model size: 8.41B params
- Architecture: llama
- Quantization: 2-bit (Q2_K)


Model tree: Manel/Mistral-NeMo-Minitron-8B-Instruct-Q2_K-GGUF, quantized from nvidia/Mistral-NeMo-Minitron-8B-Instruct.