
Iambe-RP-cDPO-20b-ALT

Trained with Alpaca prompt formatting; some users prefer ChatML, which may also work.
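
For reference, below is a minimal sketch of the standard Alpaca template. The exact system line and any multi-turn handling used during training are assumptions, not something stated on this card.

```python
# Minimal sketch of the standard Alpaca prompt template (assumed here;
# the exact wording used during training is not stated on this card).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Format a single-turn request in Alpaca style."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Describe the scene from the innkeeper's point of view."))
```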

Description

Named after a charming daughter of Echo and Pan in Greek myth, Iambe-RP-ALT comes from a training run on the v2 dataset that failed at 0.6 epochs. Therefore, I don't feel comfortable considering this a full new generation.

Iambe is intended to have the best understanding of instructions, anatomy, and scene state realistically possible for a 20b merge, while remaining passionate and humanoid in "voice".

Update Methodology

Take a look at the v2 dataset Iambe and I created together for more info. The cDPO training was done directly on Iambe-Storyteller-20b, and the notebook used is also available in the dataset's repo.
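
For readers unfamiliar with cDPO, the sketch below shows the conservative DPO loss: standard DPO with label smoothing over the preference labels to tolerate noisy pairs. The function, its argument names, and the beta/eps defaults are illustrative assumptions; the actual training code is the notebook in the dataset's repo.

```python
import torch
import torch.nn.functional as F

def cdpo_loss(policy_chosen_logps, policy_rejected_logps,
              ref_chosen_logps, ref_rejected_logps,
              beta=0.1, eps=0.1):
    """Conservative DPO: smooth the binary preference labels by eps."""
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    logits = beta * (pi_logratios - ref_logratios)
    # With probability (1 - eps) the label "chosen beats rejected" is trusted,
    # with probability eps it is treated as flipped.
    loss = -(1 - eps) * F.logsigmoid(logits) - eps * F.logsigmoid(-logits)
    return loss.mean()
```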

Assistant Example @ q5_k_m

NSFW Writing Example @ q5_k_m

Silly but Impressive RP Example @ q5_k_m
