Prompting

https://rentry.org/v43eo - recommended prompts and generation settings

The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: <|system|>, <|user|> and <|model|>.

The <|system|> prompt can be used to inject out-of-channel information behind the scenes, while the <|user|> prompt should be used to indicate user input. The <|model|> token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history.
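As a minimal sketch, a conversation can be assembled from these role tokens like so. Only the tokens themselves come from this card; the system text, the example messages, and the lack of separators between turns are illustrative assumptions (see the rentry link above for recommended prompts):

```python
# Assemble a prompt from the <|system|>/<|user|>/<|model|> role tokens.
# The system text, messages, and spacing are placeholder assumptions.
def build_prompt(system: str, history: list[tuple[str, str]], user_msg: str) -> str:
    prompt = f"<|system|>{system}"
    for past_user, past_model in history:
        prompt += f"<|user|>{past_user}<|model|>{past_model}"
    # End with <|model|> so the model generates the next reply.
    return prompt + f"<|user|>{user_msg}<|model|>"

prompt = build_prompt(
    system="Enter roleplay mode. Write the character's next reply.",
    history=[("Hi! Who are you?", "\"I'm Tsukasa,\" she says with a smile.")],
    user_msg="Nice to meet you. What do you do?",
)
```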

Training

base model (llama-2-7b-hf)

tuned on commit de693ac of the koishi dataset for 1 epoch as part of ludis/tsukasa-7b

then tuned on commit 36fc235 of pippa metharme for 1 epoch as part of ludis/tsukasa-7b

then tuned on Version 2023-09-03 of LimaRP (without ponyville, lolicit, all the fallen, and eka's portal subsets) for 2 epochs
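The card does not specify an inference setup, but the resulting checkpoint loads like any other Llama-2 causal LM. A rough sketch with the transformers library (sampling parameters below are placeholders, not the recommended settings from the rentry link):

```python
# Rough inference sketch; generation parameters are placeholder values.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ludis/tsukasa-limarp-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<|system|>Enter roleplay mode.<|user|>Hi! Who are you?<|model|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```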
