---
library_name: transformers
license: apache-2.0
datasets:
- zerofata/Roleplay-Anime-Characters
- zerofata/Instruct-Anime-CreativeWriting
- zerofata/Summaries-Anime-FandomPages
base_model:
- mistralai/Mistral-Small-3.2-24B-Instruct-2506
---

# Painted Fantasy

### Mistral Small 3.2 24B
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c6c638328850e12d38c/YkQOImbH2NJ-Lgd_q6ail.png)

## Overview

Experimental release.

This is an uncensored creative model intended to excel at character-driven RP / ERP.

It is designed to produce longer, narrative-heavy responses in which characters are portrayed accurately and proactively.

## SillyTavern Settings

### Recommended Roleplay Format

> Actions: In plaintext
> Dialogue: "In quotes"
> Thoughts: *In asterisks*

### Recommended Samplers

> Temp: 0.8
> MinP: 0.04 - 0.05
> TopP: 0.95 - 1.0
> Dry: 0.8, 1.75, 4
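As a rough illustration of what the MinP setting above does (a minimal sketch, not tied to any particular backend): tokens whose probability falls below `min_p` times the top token's probability are discarded before sampling.

```python
def min_p_filter(probs, min_p=0.05):
    """Drop tokens below min_p * (top token probability), then renormalize.

    `probs` is a plain list of token probabilities summing to 1; real
    backends apply the same cutoff to the full vocabulary distribution.
    """
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# With min_p=0.05 and a confident top token (0.6), the 0.01 tail token
# falls below the 0.03 threshold and is removed before sampling.
filtered = min_p_filter([0.6, 0.3, 0.09, 0.01], min_p=0.05)
```

Because the threshold scales with the top token's probability, MinP prunes aggressively when the model is confident and permissively when the distribution is flat, which is why it pairs well with a moderate temperature like 0.8.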

## Instruct

Mistral v7 Tekken
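If your frontend does not ship a Mistral V7 Tekken preset, the prompt layout can be approximated as below. This is an assumption based on common community presets (tags with no surrounding spaces); verify against the chat template bundled with the model's tokenizer before relying on it.

```python
def format_v7_tekken(system_prompt, user_message):
    # Assumed Mistral V7 Tekken layout; confirm against the model's
    # own chat template, as tag placement varies between Mistral versions.
    return (
        f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
        f"[INST]{user_message}[/INST]"
    )

prompt = format_v7_tekken("You are a roleplay assistant.", "Hello!")
```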

## Quantizations

## Training Process

Training process: Pretrain > SFT > DPO > DPO 2

As a test, the model was first given a small pretraining run on some light novels and Frieren wiki data. This doesn't appear to have hurt the model, and it shows small improvements in its knowledge of the lore of the series that were included.

The model then went through standard SFT on a dataset of approximately 3.6 million tokens: 700 RP conversations, 1,000 creative writing / instruct samples, and about 100 summaries. The bulk of this data has been made public.

Finally, DPO was used to make the model a little more consistent. The first stage focused on instruction following; the second tried to burn out some Mistral-isms.
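For reference, each DPO stage optimizes the standard DPO objective over chosen/rejected response pairs (this is the textbook formulation, not a claim about any custom loss used here):

$$\mathcal{L}_{\text{DPO}} = -\mathbb{E}_{(x,\, y_w,\, y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]$$

where $y_w$ and $y_l$ are the preferred and rejected responses, $\pi_\theta$ is the model being trained, $\pi_{\text{ref}}$ is the frozen reference (here, the SFT checkpoint for stage one), and $\beta$ controls how far the policy may drift from the reference.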