machineteacher committed on
Commit
e8de855
1 Parent(s): 0c4a237

Update README.md

Files changed (1)
  1. README.md +11 -1
README.md CHANGED
@@ -93,6 +93,16 @@ Note that the tokens and the task description need not be in the language of the
 
 ### Run the model
 
+**Make sure you have the following libraries installed:**
+```
+- peft
+- protobuf
+- sentencepiece
+- tokenizers
+- torch
+- transformers
+```
+
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
@@ -108,7 +118,7 @@ inputs = tokenizer(prompt, return_tensors='pt')
 
 outputs = model.generate(**inputs, max_new_tokens=20)
 
-print(tokenizer.decode(outputs[0], skip_special_tokens=True)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 # --> I have a small cat ,
 
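The prerequisites this commit adds to the README can be installed in one step; a minimal sketch, assuming `pip` in a standard Python environment (versions unpinned — pin them if you need reproducibility):

```shell
# Install the libraries the updated README lists as prerequisites
pip install peft protobuf sentencepiece tokenizers torch transformers
```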