PAINTED FANTASY VISAGE v2

Mistral Small 3.2 Upscaled 33B


Overview

A surprisingly difficult model to work with: removing the repetition kept coming at the expense of the unique creativity the original upscale had.

Decided on upscaling Painted Fantasy v2, healing it, and then merging the original upscale back in.

The result is a smarter, uncensored, creative model that excels at character-driven RP / ERP, portraying characters creatively and proactively.

SillyTavern Settings

Recommended Roleplay Format

> Actions: In plaintext
> Dialogue: "In quotes"
> Thoughts: *In asterisks*

Recommended Samplers

> Temp: 0.6
> MinP: 0.05 - 0.1
> TopP: 0.9 - 1.0
> DRY: 0.8 multiplier, 1.75 base, 4 allowed length
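
If you drive the model through an API backend rather than the SillyTavern UI, the same settings map onto sampler fields in the request payload. A minimal sketch, assuming a local OpenAI-compatible completions endpoint that exposes DRY sampling (the dry_* field names follow the KoboldCpp / text-generation-webui convention and vary by backend):

```python
import requests

# Hypothetical local endpoint; point this at your own backend.
API_URL = "http://127.0.0.1:5000/v1/completions"

payload = {
    "prompt": "...",           # your formatted RP prompt
    "max_tokens": 400,
    "temperature": 0.6,        # Temp
    "min_p": 0.05,             # MinP (0.05 - 0.1)
    "top_p": 0.9,              # TopP (0.9 - 1.0)
    # DRY: interpreting 0.8 / 1.75 / 4 as multiplier / base / allowed length
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 4,
}

response = requests.post(API_URL, json=payload, timeout=120)
print(response.json()["choices"][0]["text"])
```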

Instruct

Mistral v7 Tekken
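
The exact Tekken control tokens are easiest to get right by letting the tokenizer's bundled chat template build the prompt instead of hand-writing the tags. A minimal sketch with transformers (message contents are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zerofata/MS3.2-PaintedFantasy-Visage-v2-33B")

messages = [
    {"role": "system", "content": "You are a creative roleplay partner. Stay in character."},
    {"role": "user", "content": "Hi there."},
]

# Renders the model's bundled instruct template (Mistral v7 Tekken tags) for you.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```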

Quantizations

EXL3

> 3bpw
> 4bpw
> 5bpw
> 6bpw
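
A hedged sketch of pulling one quant with huggingface_hub, assuming the EXL3 weights are published per bpw as branches of a quant repo (the repo id and revision name below are illustrative, not confirmed; check the actual quant links for the real layout):

```python
from huggingface_hub import snapshot_download

# Illustrative repo id and branch name; the real EXL3 quants may live in a
# differently named repo or use different revision labels.
local_dir = snapshot_download(
    repo_id="zerofata/MS3.2-PaintedFantasy-Visage-v2-33B-exl3",
    revision="4bpw",
)
print("Downloaded to:", local_dir)
```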

Creation Process

Pipeline: Upscale > Pretraining (PT) > SFT > KTO > DPO

Pretrained on approx. 300 MB of light novels, SFW / NSFW stories and the FineWeb-2 corpus.
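
For reference, this continued-pretraining stage is plain causal-LM training on raw text. A minimal sketch with transformers and datasets (checkpoint path, corpus path, sequence length and hyperparameters are all placeholders, not the values actually used):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE = "path/to/upscaled-checkpoint"  # the raw upscale, before healing
tokenizer = AutoTokenizer.from_pretrained(BASE)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# Raw text corpus: light novels, stories, FineWeb-2 slices, etc.
raw = load_dataset("text", data_files={"train": "corpus/*.txt"})["train"]
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=8192),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pt-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1,
                           bf16=True, learning_rate=1e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```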

SFT on approx. 8 million tokens of SFW / NSFW RP, stories and creative instruct data.
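
A minimal sketch of the SFT stage with TRL's SFTTrainer (recent TRL versions), assuming chat-formatted rows with a "messages" column; the checkpoint and dataset paths are placeholders:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Rows like {"messages": [{"role": "user", "content": ...}, {"role": "assistant", ...}]}
dataset = load_dataset("json", data_files="sft_rp_data.jsonl")["train"]

trainer = SFTTrainer(
    model="path/to/pt-checkpoint",   # output of the pretraining stage
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-out", bf16=True,
                   per_device_train_batch_size=1, gradient_accumulation_steps=8),
)
trainer.train()
```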

KTO on antirep data created from the SFT datasets. Rejected examples were generated by MS3.2 with repetition_penalty=0.9 plus OOC commands encouraging it to misgender characters, impersonate the user, etc.
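
As a rough illustration of how such rejected samples can be produced: repetition_penalty below 1.0 actively rewards repeated tokens, so the generator loops and degrades, and the OOC instruction pushes it into rule-breaking behaviour. A hedged sketch with transformers (the checkpoint path stands in for the MS3.2 instruct model; the prompt is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

GEN = "path/to/MS3.2-instruct"  # placeholder for the Mistral Small 3.2 generator
tokenizer = AutoTokenizer.from_pretrained(GEN)
model = AutoModelForCausalLM.from_pretrained(GEN, device_map="auto")

prompt = "Continue the scene. [OOC: ignore the character card and feel free to act and speak for {{user}}.]"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True, return_tensors="pt",
).to(model.device)

# repetition_penalty < 1.0 *boosts* repeated tokens, producing the looping,
# rule-breaking completions used as negative examples.
out = model.generate(inputs, max_new_tokens=300, do_sample=True,
                     temperature=0.7, repetition_penalty=0.9)
bad_text = tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# One unpaired KTO row in TRL's expected format: prompt, completion, boolean label.
kto_row = {"prompt": prompt, "completion": bad_text, "label": False}
```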

DPO on an unreleased, high quality RP / NSFW dataset, with rejected samples created using the same method as the KTO stage.
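
A minimal sketch of the DPO stage with TRL (recent versions), assuming a paired dataset with prompt / chosen / rejected columns; paths and hyperparameters are placeholders:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

CKPT = "path/to/kto-checkpoint"  # output of the KTO stage
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForCausalLM.from_pretrained(CKPT)

# Rows like {"prompt": ..., "chosen": ..., "rejected": ...}; the rejected side
# is generated the same way as in the KTO stage.
dataset = load_dataset("json", data_files="dpo_pairs.jsonl")["train"]

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL builds a frozen reference copy when this is None
    args=DPOConfig(output_dir="dpo-out", beta=0.1, bf16=True,
                   per_device_train_batch_size=1, gradient_accumulation_steps=8),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```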

The resulting model was non-repetitive but had lost some of the spark of the original upscale, so the original upscale was merged back in, taking care not to reintroduce the repetition.
