This model was converted to GGUF format from nvidia/Mistral-NeMo-Minitron-8B-Instruct and quantized to Q2_K (2-bit) using the llama.cpp library.
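
Below is a minimal sketch of running the quantized file locally with the llama-cpp-python bindings. The local filename `mistral-nemo-minitron-8b-instruct-q2_k.gguf` is an assumed placeholder; substitute the actual `.gguf` file downloaded from this repository.

```python
# Minimal sketch: load the Q2_K GGUF with llama-cpp-python (pip install llama-cpp-python).
# The model_path below is an assumed placeholder, not a file name confirmed by this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-nemo-minitron-8b-instruct-q2_k.gguf",  # assumed local path
    n_ctx=4096,  # context window; adjust to available memory
)

# Simple chat-style completion using the OpenAI-compatible response format.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what Q2_K quantization trades off."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```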
Model tree for Manel/Mistral-NeMo-Minitron-8B-Instruct-Q2_K-GGUF
- Base model: nvidia/Mistral-NeMo-Minitron-8B-Base
- Fine-tuned: nvidia/Mistral-NeMo-Minitron-8B-Instruct