Tags: Transformers, GGUF, Eval Results, Inference Endpoints, conversational


QuantFactory/Flammades-Mistral-Nemo-12B-GGUF

This is a quantized version of flammenai/Flammades-Mistral-Nemo-12B, created using llama.cpp.

Original Model Card

Flammades-Mistral-Nemo-12B

nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B-v2, fine-tuned on flammenai/Date-DPO-NoAsterisks and jondurbin/truthy-dpo-v0.1.

Method

ORPO-tuned on 2x RTX 3090 for 3 epochs.
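ORPO folds preference optimization into fine-tuning by adding an odds-ratio penalty to the usual SFT loss. A minimal sketch of that penalty term, assuming sequence-level probabilities for the chosen and rejected responses are already available (function names are illustrative, not from the actual training code):

```python
import math

def odds(p: float) -> float:
    """Odds of an event with probability p."""
    return p / (1.0 - p)

def orpo_penalty(p_chosen: float, p_rejected: float) -> float:
    """Odds-ratio penalty: -log sigmoid(log odds(chosen) - log odds(rejected))."""
    log_odds_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))

# The penalty shrinks as the model assigns higher probability to the
# chosen response relative to the rejected one.
print(orpo_penalty(0.5, 0.5))  # log 2 ≈ 0.693: no preference yet
print(orpo_penalty(0.9, 0.1))  # much smaller: chosen is strongly preferred
```

In the full ORPO objective this penalty is scaled by a weight and added to the cross-entropy loss on the chosen response, so no separate reference model is needed (unlike DPO).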

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                 Value
Avg.                   22.34
IFEval (0-shot)        38.42
BBH (3-shot)           32.39
MATH Lvl 5 (4-shot)     6.19
GPQA (0-shot)           7.16
MuSR (0-shot)          20.31
MMLU-PRO (5-shot)      29.57
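The reported Avg. is the unweighted mean of the six benchmark scores, which is easy to check directly:

```python
# Open LLM Leaderboard scores from the table above
scores = {
    "IFEval (0-shot)": 38.42,
    "BBH (3-shot)": 32.39,
    "MATH Lvl 5 (4-shot)": 6.19,
    "GPQA (0-shot)": 7.16,
    "MuSR (0-shot)": 20.31,
    "MMLU-PRO (5-shot)": 29.57,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 22.34, matching the reported Avg.
```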
Downloads last month: 480
Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
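For picking a quantization level, a back-of-envelope size estimate helps: the weight payload is roughly parameter count times bits per weight. This is a rough sketch only, ignoring GGUF metadata, per-block scales, and tensors kept at higher precision, so real files run somewhat larger:

```python
PARAMS = 12.2e9  # parameter count from the model card

def approx_size_gb(bits_per_weight: float) -> float:
    """Very rough GGUF size estimate in GB (decimal): ignores metadata,
    per-block quantization scales, and mixed-precision tensors."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

By this estimate the 4-bit file weighs in around 6 GB and the 8-bit file around 12 GB, which is the usual trade-off between VRAM footprint and quantization quality.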

