Update README.md
Fixed sample code in the model card.
- chat template
- missing import
README.md CHANGED
@@ -118,7 +118,7 @@ this model is publicly available (entirely on Hugging Face), and scripts provide
 Load the model like this:
 ```python
 import torch
-from transformers import AutoModelForCausalLM, AutoTokenizer
+from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
 
 model = AutoModelForCausalLM.from_pretrained("tomg-group-umd/huginn-0125", torch_dtype=torch.bfloat16, trust_remote_code=True)
 tokenizer = AutoTokenizer.from_pretrained("tomg-group-umd/huginn-0125")
@@ -164,8 +164,8 @@ outputs = model.generate(input_ids, config, tokenizer=tokenizer, num_steps=16)
 The model was not finetuned or post-trained, but due to the inclusion of instruction data during pretraining, it natively understands its chat template. You can chat with the model like so:
 ```
 messages = []
-messages.append({"role": "system", "content" : You are a helpful assistant."}
-messages.append({"role": "user", "content" : What do you think of Goethe's Faust?"}
+messages.append({"role": "system", "content" : "You are a helpful assistant."})
+messages.append({"role": "user", "content" : "What do you think of Goethe's Faust?"})
 chat_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
 print(chat_input)
 input_ids = tokenizer.encode(chat_input, return_tensors="pt", add_special_tokens=False).to(device)
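For reference, the corrected loading snippet from the first hunk assembles into a runnable block along the lines of the sketch below. This is not part of the card itself: the `device` definition and the `.to(device)`/`.eval()` calls are assumptions, since the card references `device` in later snippets without defining it in the hunks shown here.

```python
# Sketch of the corrected loading code after this commit.
# The `device` handling and eval-mode switch are assumptions, not from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# trust_remote_code=True is needed because the repo ships custom modeling code.
model = AutoModelForCausalLM.from_pretrained(
    "tomg-group-umd/huginn-0125",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("tomg-group-umd/huginn-0125")
model = model.to(device).eval()
```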
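Likewise, the corrected chat-template snippet, combined with the `generate` call visible in the second hunk's header, suggests an end-to-end flow like the sketch below. Only the message construction, the `apply_chat_template`/`encode` calls, and `model.generate(input_ids, config, tokenizer=tokenizer, num_steps=16)` come from the card; the `GenerationConfig` arguments and the final decoding step are illustrative assumptions.

```python
# Continues from the loading sketch above (model, tokenizer, device).
# The GenerationConfig arguments are assumptions; the card passes a
# `config` object to generate() without showing its construction in this hunk.
config = GenerationConfig(max_new_tokens=256, do_sample=False)

messages = []
messages.append({"role": "system", "content": "You are a helpful assistant."})
messages.append({"role": "user", "content": "What do you think of Goethe's Faust?"})

# Render the chat template to a string first, then tokenize it separately,
# exactly as the corrected card snippet does.
chat_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(chat_input)
input_ids = tokenizer.encode(chat_input, return_tensors="pt", add_special_tokens=False).to(device)

# `num_steps` is the custom argument shown in the card's generate() call.
outputs = model.generate(input_ids, config, tokenizer=tokenizer, num_steps=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # decoding step assumed
```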