This model was fine-tuned with DPO from v1olet/v1olet_marcoroni-go-bruins-merge-7B, which ranked 6th on the overall leaderboard and 1st on the 7B leaderboard.

You can use the Alpaca template:

template_format = """{system}
### Instruction:
{prompt}

### Response:
"""

Developed by: Trong-Hieu Nguyen-Mau
