
RWKV v4 7B world model

Fine-tuned with UltraChat, CoT, some novel-style instruction data, CommitPackFT, and other data.

Uses the full UltraChat and CoT data, about 3B tokens.

If you want to do role play, use this model.

Contributors

@JL-er @Remixa

Design of experiment

This model lost its multi-turn chat ability because it was trained on the whole UltraChat dataset.

So it was further fine-tuned on multi-turn data covering two aspects:

1. Role play

2. Novel-style multi-turn instructions

Training details

wandb.ai

Cases

(example screenshots)

Usage

Adjust temperature and top_p for different scenarios.
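Below is a minimal inference sketch using the `rwkv` pip package. The checkpoint path is a placeholder and the sampling values are only starting points; `rwkv_vocab_v20230424` is the standard tokenizer for RWKV World models.

```python
# Minimal sketch using the `rwkv` pip package (pip install rwkv).
# The model path is a placeholder -- point it at the checkpoint from this
# repo, without the .pth extension (the library appends it).
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

model = RWKV(model="RWKV-4-world-one-state-ultrachat-COT-65k",
             strategy="cuda fp16")                    # or "cpu fp32" without a GPU
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")    # World-model tokenizer

# Lower temperature / top_p for CoT and factual tasks,
# raise them for role play and novel-style writing.
args = PIPELINE_ARGS(temperature=1.0, top_p=0.3,
                     alpha_frequency=0.4, alpha_presence=0.4)

prompt = "User: Summarize what RWKV is in two sentences.\n\nAssistant:"
print(pipeline.generate(prompt, token_count=256, args=args))
```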


CoT and lookback

(screenshots of CoT/lookback examples)

This model handles these tasks with 100% accuracy.

Role play

(role-play example screenshots)
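The sketch below shows one way to build a multi-turn role-play prompt for this model. The User:/Assistant: turn format is an assumption based on the common RWKV World chat convention and is not confirmed by this card; the exact template this finetune expects may differ.

```python
# Multi-turn role-play sketch. The User:/Assistant: turn format is an
# assumption (common RWKV World convention), not confirmed by this card.
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

model = RWKV(model="RWKV-4-world-one-state-ultrachat-COT-65k",  # placeholder path, no .pth
             strategy="cuda fp16")
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")

# Looser sampling tends to suit creative, in-character chat.
args = PIPELINE_ARGS(temperature=1.2, top_p=0.6,
                     alpha_frequency=0.4, alpha_presence=0.4)

history = [
    ("User", "You are a grumpy old wizard. Stay in character."),
    ("Assistant", "Hmph. What do you want, apprentice?"),
    ("User", "Teach me a simple fire spell."),
]
prompt = "\n\n".join(f"{role}: {text}" for role, text in history) + "\n\nAssistant:"
print(pipeline.generate(prompt, token_count=300, args=args))
```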

Novel

(example screenshots)

Demo site (temporary)

online showcase

