Flora DPO


Fine-tuned with this DPO dataset: https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs
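
For reference, a minimal sketch of what DPO fine-tuning on this dataset can look like with TRL's `DPOTrainer`. The base checkpoint, hyperparameters, dataset column names, and TRL version below are assumptions for illustration, not the exact recipe used for this model.

```python
# DPO fine-tuning sketch (assumes a recent TRL release; the base checkpoint
# and hyperparameters are placeholders, not the actual training recipe).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "ResplendentAI/Flora_7B"  # assumed base checkpoint
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Assumed to expose prompt / chosen / rejected pairs in ChatML format.
train_dataset = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

config = DPOConfig(
    output_dir="flora-dpo",
    beta=0.1,                      # strength of the preference (KL) penalty
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,    # older TRL versions use `tokenizer=` instead
)
trainer.train()
trainer.save_model("flora-dpo")
```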

Quants available here:

https://huggingface.co/solidrust/Flora-7B-DPO-AWQ

https://huggingface.co/Test157t/ResplendentAI-Flora_DPO_7B-5bpw-exl2
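
If you use the AWQ quant linked above, a recent transformers can load it directly; a quick sketch (the repo id comes from the link above, everything else is a generic assumption):

```python
# Loading the AWQ quant with transformers (assumes the `autoawq` and
# `accelerate` packages are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_id = "solidrust/Flora-7B-DPO-AWQ"
tokenizer = AutoTokenizer.from_pretrained(quant_id)
model = AutoModelForCausalLM.from_pretrained(quant_id, device_map="auto")
```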

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 74.26 |
| AI2 Reasoning Challenge (25-shot) | 71.76 |
| HellaSwag (10-shot)               | 88.28 |
| MMLU (5-shot)                     | 64.13 |
| TruthfulQA (0-shot)               | 71.08 |
| Winogrande (5-shot)               | 84.53 |
| GSM8K (5-shot)                    | 65.81 |
Safetensors weights, 7.24B parameters, FP16 tensors.
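
Since the weights ship as FP16 safetensors, the model loads with stock transformers. A minimal generation sketch; the prompt and sampling settings are illustrative, and a ChatML chat template on the tokenizer is assumed from the DPO dataset's format:

```python
# Minimal FP16 inference sketch; prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ResplendentAI/Flora_DPO_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Assumes the tokenizer ships a ChatML-style chat template.
messages = [{"role": "user", "content": "Write a short haiku about spring rain."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```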

