---
license: apache-2.0
language:
- zh
- en
pipeline_tag: text-generation
tags:
- roleplay
- rp
- character
---
# Peach-9B-8k-Roleplay
Peach-9B-8k-Roleplay is a chat large language model obtained by fine-tuning the [01-ai/Yi-1.5-9B](https://huggingface.co/01-ai/Yi-1.5-9B) model on more than 100K conversations created through our data-synthesis approach.
## How to start
We use the following package versions; newer versions may also work.
```
torch==1.13.1
gradio==3.50.2
transformers==4.37.2
```
Then run the following code for inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "ClosedCharacter/Peach-9B-8k-Roleplay"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path, torch_dtype=torch.bfloat16,
    trust_remote_code=True, device_map="auto")

messages = [
    {"role": "system", "content": "你是黑丝御姐"},  # system persona (zh): "You are a mature older-sister type in black stockings"
    {"role": "user", "content": "你好,你是谁"},    # user turn (zh): "Hello, who are you?"
]

# Build the prompt with the model's chat template and move it to the model's device
input_ids = tokenizer.apply_chat_template(
    conversation=messages, tokenize=True,
    add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(
    inputs=input_ids,
    do_sample=True,
    temperature=0.3,
    top_p=0.5,
    no_repeat_ngram_size=6,
    repetition_penalty=1.1,
    max_new_tokens=512)

print(tokenizer.decode(output[0]))
```
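For multi-turn roleplay, append the model's reply and the next user message to `messages`, then re-apply the chat template. Below is a minimal sketch reusing `model`, `tokenizer`, `messages`, `input_ids`, and `output` from the snippet above; the follow-up user message is only an illustration.
```python
# Slice off the prompt tokens so only the newly generated reply is kept
reply = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "可以多介绍一下你自己吗?"})  # (zh): "Can you tell me more about yourself?"

# Rebuild the prompt from the full history and generate the next turn
input_ids = tokenizer.apply_chat_template(
    conversation=messages, tokenize=True,
    add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(
    inputs=input_ids, do_sample=True, temperature=0.3, top_p=0.5,
    no_repeat_ngram_size=6, repetition_penalty=1.1, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```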
Alternatively, run the following command to launch the web demo.
```
python demo.py
```
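If you do not have `demo.py` at hand, a minimal Gradio chat demo in the same spirit could look like the sketch below. This is an illustration assuming gradio==3.50.2, not the actual `demo.py`; the `chat` function and default system prompt are assumptions.
```python
# demo_sketch.py -- a minimal Gradio chat demo (illustrative, not the official demo.py)
import torch
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "ClosedCharacter/Peach-9B-8k-Roleplay"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path, torch_dtype=torch.bfloat16,
    trust_remote_code=True, device_map="auto")

SYSTEM_PROMPT = "你是黑丝御姐"  # example system persona from the snippet above

def chat(message, history):
    # Rebuild the full conversation from Gradio's (user, assistant) history pairs
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for user_turn, assistant_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": message})

    input_ids = tokenizer.apply_chat_template(
        conversation=messages, tokenize=True,
        add_generation_prompt=True, return_tensors="pt").to(model.device)
    output = model.generate(
        inputs=input_ids, do_sample=True, temperature=0.3, top_p=0.5,
        no_repeat_ngram_size=6, repetition_penalty=1.1, max_new_tokens=512)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

gr.ChatInterface(chat).launch()
```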
## Benchmark
| Metric | Value |
|----------------|-----------------|
| MMLU (5-shot) | 66.19 |
| CMMLU (5-shot) | 69.07 |
## Contact Us
微信 / WeChat: Fungorum

邮箱 / E-mail: 1070193753@qq.com