Tags: Text Generation · Transformers · PyTorch · Chinese · English · llama · text-generation-inference
fireballoon committed on
Commit
66f945e
1 Parent(s): ddc2956

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -38,8 +38,8 @@ python3 -m fastchat.serve.cli --model-path fireballoon/baichuan-vicuna-chinese-7
  Inference with Transformers:
  ```ipython
  >>> from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
- >>> tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-7b", use_fast=False)
- >>> model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-7b").half().cuda()
+ >>> tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-chinese-7b", use_fast=False)
+ >>> model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-chinese-7b").half().cuda()
  >>> streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
  >>> instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
  >>> prompt = instruction.format("How can I improve my time management skills?") # user message
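The diffed snippet ends right after building the prompt. For context, the Vicuna-style template it uses can be exercised on its own; this is a minimal sketch of just the string formatting (no model download), with the follow-on generation step noted in comments as an assumption about how the README snippet continues:

```python
# Vicuna-style chat template from the README snippet above.
instruction = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: {} ASSISTANT:"
)

def build_prompt(user_message: str) -> str:
    """Insert the user's message into the chat template."""
    return instruction.format(user_message)

prompt = build_prompt("How can I improve my time management skills?")

# The model is expected to complete the text after the "ASSISTANT:" marker.
# Generation would continue roughly like this (hypothetical continuation;
# requires downloading the 7B checkpoint and a CUDA device):
#   inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
#   model.generate(**inputs, streamer=streamer, max_new_tokens=512)
```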