Prompting

https://rentry.org/tsukasa13b - recommended prompts and generation settings

The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: <|system|>, <|user|> and <|model|>.

The <|system|> prompt can be used to inject out-of-channel information behind the scenes, while the <|user|> prompt should be used to indicate user input. The <|model|> token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history.
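For illustration, a minimal sketch of how these tokens might be chained into a single prompt string; the helper function, system text, and messages are placeholders rather than the recommended prompts linked above, and whether whitespace separates the segments is not specified here.

```python
# Hypothetical helper showing one way to chain the role tokens into a prompt.
# See the recommended prompts linked above for the exact formatting.
def build_prompt(system_text, turns):
    """turns: list of (user_text, model_reply) pairs; pass None as the last
    reply to end the prompt at <|model|> and request a new completion."""
    prompt = f"<|system|>{system_text}"
    for user_text, model_reply in turns:
        prompt += f"<|user|>{user_text}<|model|>"
        if model_reply is not None:
            prompt += model_reply
    return prompt

prompt = build_prompt(
    "Describe the scenario and characters here.",
    [("Hi, who are you?", None)],
)
# `prompt` now ends with <|model|>, so the model's next tokens are its reply.
```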

Training

base model (llama-2-13b-hf)

tuned on koishi dataset (commit c83d922) for 1 epoch

then tuned on pippa dataset (commit 6412b0c) for 1 epoch

then tuned on geepeetee4 dataset (commit c83d922) for 1 epoch

then tuned on limarp (without the ponyville, lolicit, and all the fallen subsets; version 2023-09-14) for 2 epochs (see the sketch below)
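A minimal sketch of what a single QLoRA tuning stage could look like, assuming the Hugging Face transformers/peft/trl stack as of late 2023; the dataset file, LoRA hyperparameters, and training arguments are illustrative assumptions, not the settings actually used. The actual run repeated a stage like this once per dataset in the list above.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

BASE = "meta-llama/Llama-2-13b-hf"

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb_config, device_map="auto"
)

# Low-rank adapters trained on top of the quantized weights.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

# Placeholder: a JSONL file whose "text" field already contains prompts in
# the <|system|>/<|user|>/<|model|> format described above.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora_config,
    dataset_text_field="text",
    max_seq_length=4096,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="tsukasa-stage",
        num_train_epochs=1,            # 2 for the limarp stage
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-4,
    ),
)
trainer.train()
```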
