Usage:
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch
# Load the merged LoRA checkpoint; trust_remote_code=True is required because
# the Baichuan repo ships custom modeling code.
tokenizer = AutoTokenizer.from_pretrained("baichuan7b-lora-merged", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "baichuan7b-lora-merged",
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.float16,
)

# Stream generated tokens to stdout as soon as they are produced.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
while True:
    query = input("Please enter your question: ")
    if len(query.strip()) == 0:
        break
    # Wrap the query in the <human>/<bot> prompt template expected by the model.
    inputs = tokenizer(["<human>:{}\n<bot>:".format(query)], return_tensors="pt")
    inputs = inputs.to(model.device)
    generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
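If you want the full response as a string rather than streaming it to stdout, you can decode the generated ids yourself. A minimal sketch, reusing the model, tokenizer, and inputs from above:

# Generate without a streamer, then decode only the newly generated tokens
# (slice off the prompt portion of each sequence).
generate_ids = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.batch_decode(
    generate_ids[:, inputs.input_ids.shape[1]:],
    skip_special_tokens=True,
)[0]
print(response)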