MiquMaid v2 DPO

Check out our blog post about this model series here! Join our Discord server here!

[V2-70B - V2-70B-DPO - V2-2x70B - V2-2x70B-DPO]

This model uses the Alpaca prompting format.

This model was trained for RP conversation on Miqu-70B with our magic sauce, then further trained with DPO for uncensoring.

Credits:

  • Undi
  • IkariDev

Description

This repo contains FP16 files of MiquMaid-v2-70B-DPO.

Switch: FP16 - GGUF

Training data used:

DPO training data used:

Custom format:

### Instruction:
{system prompt}

### Input:
{input}

### Response:
{reply}
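The template above can be assembled programmatically before sending it to the model. This is a minimal sketch; the helper name `build_prompt` and its arguments are illustrative, not part of the card:

```python
# Sketch: fill the Alpaca-style template used by MiquMaid-v2-70B-DPO.
# Field names below are placeholders matching the card's template.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "### Instruction:\n"
        f"{system_prompt}\n\n"
        "### Input:\n"
        f"{user_input}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "You are a character in a roleplay conversation.",
    "Hello! Who are you?",
)
print(prompt)
```

The model's reply is expected to continue the text after the final `### Response:` header.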

Others

Undi: If you want to support us, you can here.

IkariDev: Visit my retro/neocities style website please kek

