Mistral-Nemo-Prism-12B
A finetune of Mahou-1.5-mistral-nemo-12B-lorablated on the Arkhaios-DPO and Purpura-DPO datasets.
The goal was to reduce archaic language and purple prose in a completely uncensored model.
Method
ORPO-tuned on 2x A100 GPUs for 5 epochs.
For this version, the learning rate was lowered to 3e-6. In addition, a system prompt was introduced to further augment the prompts and encourage responses to match the data.
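The system-prompt augmentation described above could be sketched as a simple preprocessing step over DPO-style preference pairs. This is an illustrative assumption, not the actual training script: the prompt wording, field names (`prompt`, `chosen`, `rejected`), and example pair are all hypothetical.

```python
# Hedged sketch: prepend a steering system prompt to DPO-style preference
# pairs before ORPO tuning. The prompt text and field names are assumptions,
# not the card author's actual pipeline.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Write in clear, modern English; "
    "avoid archaic vocabulary and purple prose."
)

def augment(example: dict) -> dict:
    """Prefix the system prompt so responses are steered toward the data."""
    return {
        "prompt": f"{SYSTEM_PROMPT}\n\n{example['prompt']}",
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

# Hypothetical preference pair mirroring the anti-purple-prose objective.
pair = {
    "prompt": "Describe a sunset.",
    "chosen": "The sun set quickly, and the sky turned deep orange.",
    "rejected": "Lo, the resplendent orb didst descend 'neath gilded heavens.",
}
augmented = augment(pair)
```

Mapping a function like this over the whole dataset before handing it to the trainer keeps the chosen/rejected completions untouched while every prompt carries the same steering instruction.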