# Mistral-Nemo-Prism-12B-v7
A fine-tune of Mahou-1.5-mistral-nemo-12B-lorablated on the Arkhaios-DPO and Purpura-DPO datasets.

The goal was to reduce archaic language and purple prose in a completely uncensored model.
## Method
ORPO-tuned on 8x A40 GPUs for 10 epochs. For this version, beta was increased to 2.0.
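For reference, below is a minimal sketch of this kind of run using TRL's `ORPOTrainer` with a LoRA adapter, assuming a recent TRL release (older versions take `tokenizer=` instead of `processing_class=`). The `beta=2.0` and 10-epoch values come from this card; the LoRA rank/alpha, dataset id, and remaining hyperparameters are illustrative placeholders, not the card's actual settings.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Base model per the card; the exact repo id may differ.
base = "Mahou-1.5-mistral-nemo-12B-lorablated"

model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA adapter config; rank and alpha here are illustrative.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

# beta=2.0 and 10 epochs match the card; everything else is a placeholder.
args = ORPOConfig(
    beta=2.0,
    num_train_epochs=10,
    output_dir="Mistral-Nemo-Prism-12B-v7",
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    # The card trains on Arkhaios-DPO and Purpura-DPO; this id is a stand-in.
    train_dataset=load_dataset("your/dpo-dataset", split="train"),
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```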
In conclusion, LoRA tuning does not appear capable of fully removing some of the language issues deeply embedded in the model.