Silicon-Natsuki-7b-v0.5

  • Yet another test fine-tune, this time for the character Natsuki from DDLC, made per a request
  • Fine-tuned on a WIP dataset of ~800 items: dialogue scraped from the game and augmented by Mistral into snippets of multi-turn chat between Player and Natsuki, plus manually edited items that feed in character details such as height, hair color, etc.
  • Base: SanjiWatsuki/Silicon-Maid-7B (Mistral)
  • GGUF
  • LoRA here

USAGE

For best results, replace "Human" and "Assistant" in your prompt template with "Player" and "Natsuki", like so:

\nPlayer: (prompt)\nNatsuki:
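As a minimal sketch of that format (the helper below is illustrative, not part of the model card; the model is expected to continue the text after the trailing "Natsuki:" label):

```python
def build_prompt(history, user_message):
    """Build a Player/Natsuki prompt string.

    history: list of (speaker, text) tuples for prior turns,
             e.g. [("Player", "Hi"), ("Natsuki", "Hey!")].
    user_message: the new Player message to append.
    """
    lines = [f"\n{speaker}: {text}" for speaker, text in history]
    # End with an open "Natsuki:" label so the model completes her reply.
    lines.append(f"\nPlayer: {user_message}\nNatsuki:")
    return "".join(lines)

print(repr(build_prompt([], "Hi Natsuki!")))
# → '\nPlayer: Hi Natsuki!\nNatsuki:'
```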

HYPERPARAMS

  • Trained for 1 epoch
  • LoRA rank: 32
  • LoRA alpha: 32
  • LoRA dropout: 0
  • learning rate: 2e-4
  • batch size: 2
  • gradient accumulation steps: 4
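As a quick sanity check of the settings above (a sketch, not the actual training script; the config dict is just an illustration of how these values might be passed to a LoRA trainer):

```python
# Hyperparameters as listed in this card.
lora_config = {
    "r": 32,            # LoRA rank
    "lora_alpha": 32,
    "lora_dropout": 0.0,
}
learning_rate = 2e-4
batch_size = 2
grad_accum_steps = 4

# With gradient accumulation, the effective batch size per
# optimizer step is batch_size * grad_accum_steps.
effective_batch = batch_size * grad_accum_steps
print(effective_batch)  # → 8
```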

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

WARNINGS AND DISCLAIMERS

This model is meant to closely reflect Natsuki's characteristics. Even so, there is always a chance that "Natsuki" will hallucinate, get information about herself wrong, or act out of character. For example, in testing she knows her own club and its members, and even her height and favorite ice cream flavor, yet may still slip up, such as by claiming to be the club president.

Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk.
