UltraMerge-7B

This model is an experimental DPO fine-tune of automerger/YamShadow-7B on the following datasets:

  • mlabonne/truthy-dpo-v0.1
  • mlabonne/distilabel-intel-orca-dpo-pairs
  • mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
  • mlabonne/ultrafeedback-binarized-preferences-cleaned
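
For reference, DPO trains on chosen/rejected preference pairs by pushing the policy's log-ratio for the chosen completion above the reference model's. A minimal sketch of the per-pair loss, with sequence log-probabilities as plain floats and a hypothetical β default (not the value used for this model):

```python
import math

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    # Difference of log-ratios between chosen and rejected completions,
    # measured relative to the frozen reference model.
    logits = beta * ((policy_chosen_lp - ref_chosen_lp)
                     - (policy_rejected_lp - ref_rejected_lp))
    # Negative log-sigmoid: shrinks as the policy prefers the chosen
    # answer more strongly than the reference model does.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When policy and reference agree exactly, the loss is log 2 ≈ 0.693; it decreases as the margin in favor of the chosen completion grows.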

I'm not sure which chat template works best for this model; Mistral-Instruct or ChatML are the most likely candidates.
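
If you want to try both, here is a minimal sketch of the two candidate formats, assuming the standard public Mistral-Instruct and ChatML conventions (not confirmed for this model; in practice you would use the tokenizer's `apply_chat_template` if one is defined):

```python
def mistral_instruct(messages):
    # Standard Mistral-Instruct convention: user turns wrapped in
    # [INST] ... [/INST], assistant text appended after, closed with </s>.
    out = ""
    for m in messages:
        if m["role"] == "user":
            out += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            out += f" {m['content']}</s>"
    return out

def chatml(messages):
    # Standard ChatML convention: <|im_start|>role\ncontent<|im_end|>
    # blocks, ending with an open assistant header for generation.
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    out += "<|im_start|>assistant\n"
    return out
```

Run a few prompts through each format and keep whichever yields more coherent completions.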

Model size: 7.24B parameters (FP16, Safetensors)