QuantFactory/mistral-nemo-gutenberg-12B-v3-GGUF
This is a quantized version of nbeerbower/mistral-nemo-gutenberg-12B-v3, created using llama.cpp.
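As a quick way to try one of the GGUF files locally, here is a minimal sketch using llama-cpp-python and huggingface_hub. The filename below is an assumption based on common QuantFactory naming; check the repo's file list for the exact quantization you want.

```python
# Minimal sketch: download one GGUF file from the repo and run a short completion.
# The filename is assumed, not confirmed -- list the repo files to find the real one.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/mistral-nemo-gutenberg-12B-v3-GGUF",
    filename="mistral-nemo-gutenberg-12B-v3.Q4_K_M.gguf",  # assumed filename
)

# Load the quantized model and generate text.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write a short passage in the style of a classic novel:", max_tokens=128)
print(out["choices"][0]["text"])
```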
Original Model Card
mistral-nemo-gutenberg-12B-v3
intervitens/mini-magnum-12b-v1.1 finetuned on jondurbin/gutenberg-dpo-v0.1.
Method
Finetuned using an A100 on Google Colab for 3 epochs.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit GGUF files.
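Since the exact GGUF filenames for each quantization level vary, one way to see what is actually in the repo before downloading is to list its files, for example:

```python
# List the GGUF files in the repo so you can pick a quantization level.
from huggingface_hub import list_repo_files

files = list_repo_files("QuantFactory/mistral-nemo-gutenberg-12B-v3-GGUF")
for f in files:
    if f.endswith(".gguf"):
        print(f)
```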
Model tree for QuantFactory/mistral-nemo-gutenberg-12B-v3-GGUF
Base model: intervitens/mini-magnum-12b-v1.1