Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# load the tokenizer and base model, then attach the Bone adapter
tokenizer = AutoTokenizer.from_pretrained("IndexTeam/Index-1.9B-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("IndexTeam/Index-1.9B-Chat", trust_remote_code=True)
model = PeftModel.from_pretrained(model, "Awaitinf/Index-1.9B-Bone-Poet").to("cuda")
model.eval()
text = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": "你是一个现代诗人"},  # "You are a modern poet"
        {"role": "user", "content": "使用以下标题写一首现代诗:锈蚀的钥匙"}  # "Write a modern poem with the title: The Rusted Key"
    ],
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to("cuda")

# generate the completion, stopping at the end-of-sequence string
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1344,
    stop_strings=["</s>"],
    tokenizer=tokenizer
)
# strip the prompt tokens and decode only the newly generated text
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
print(tokenizer.decode(output_ids))
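The slice in the decode step works because `model.generate` returns the prompt tokens followed by the new tokens; dropping the first `len(prompt)` ids leaves only the completion. A minimal sketch of that logic on plain Python lists (the token ids below are hypothetical, not real vocabulary entries):

```python
def strip_prompt(full_ids, prompt_ids):
    # generate() output = prompt ids + newly generated ids;
    # keep only the newly generated portion
    return full_ids[len(prompt_ids):]

# toy ids standing in for tokenizer output
prompt = [1, 100, 101]
full = prompt + [500, 501, 2]
print(strip_prompt(full, prompt))  # → [500, 501, 2]
```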
Sample output:

锈蚀的钥匙插不进锁孔,它徒劳地摇晃,试图开启某种可能。
它试图唤醒沉睡的记忆,试图找到通往未来的路径。
然而,钥匙和锁孔都已经失去了原有的光泽,它们彼此陌生,彼此锈蚀。
钥匙徒劳地摇晃,试图唤醒沉睡的记忆,试图找到通往未来的路径。

(Translation: The rusted key cannot enter the keyhole; it rattles in vain, trying to open some possibility. It tries to wake the sleeping memories, to find a path to the future. Yet both key and keyhole have lost their former luster; strangers to each other, rusting away. The key rattles in vain, trying to wake the sleeping memories, to find a path to the future.)
Base model: IndexTeam/Index-1.9B-Chat