Rocketknight1 (HF staff) committed
Commit 307843d
1 Parent(s): fba8ad6

Update README.md

Files changed (1): README.md +44 -0
and we get:
The current temperature in Paris, France is 22.0 degrees Celsius.<|im_end|>
```

## Chat Templates for function calling

You can also use chat templates for function calling. For more information, please see the relevant section of the [chat template documentation](https://huggingface.co/docs/transformers/en/chat_templating#advanced-tool-use--function-calling).

Here is a brief example of this approach:

```python
def multiply(a: int, b: int):
    """
    A function that multiplies two numbers

    Args:
        a: The first number to multiply
        b: The second number to multiply
    """
    return int(a) * int(b)

tools = [multiply]  # Only one tool in this example, but you probably want multiple!

model_input = tokenizer.apply_chat_template(
    messages,
    tools=tools
)
```

The docstrings and type hints of the functions will be used to generate a function schema that will be read by the chat template and passed to the model. Please make sure you include a docstring in the same format as this example!
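
The conversion itself happens inside `transformers`, but as an illustration, here is a rough sketch (plain Python; the exact field layout is an assumption, not the library's internals) of the kind of JSON schema that the docstring and type hints above would produce:

```python
import inspect

def multiply(a: int, b: int):
    """
    A function that multiplies two numbers

    Args:
        a: The first number to multiply
        b: The second number to multiply
    """
    return int(a) * int(b)

# Illustrative sketch of the kind of schema derived from the signature and
# docstring; the real conversion is done internally by transformers.
schema = {
    "type": "function",
    "function": {
        "name": multiply.__name__,
        "description": "A function that multiplies two numbers",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "integer", "description": "The first number to multiply"},
                "b": {"type": "integer", "description": "The second number to multiply"},
            },
            # Both parameters lack defaults, so both are required
            "required": list(inspect.signature(multiply).parameters),
        },
    },
}

print(schema["function"]["name"])                     # multiply
print(schema["function"]["parameters"]["required"])   # ['a', 'b']
```

This is why the docstring format matters: the `Args:` entries become the per-parameter descriptions in the schema the model sees.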

If the model makes a tool call, you can append the tool call to the conversation like so:

```python
tool_call = {"name": "multiply", "arguments": {"a": "6", "b": "7"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```
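
If your model emits its tool calls as JSON wrapped in `<tool_call>` tags (an assumption here; check your model's actual output format), extracting the call dict before appending it might look like this sketch, with a hypothetical `raw_output` string:

```python
import json
import re

# Hypothetical raw model output; assumes the tool call is a JSON object
# wrapped in <tool_call>...</tool_call> tags.
raw_output = '<tool_call>\n{"name": "multiply", "arguments": {"a": "6", "b": "7"}}\n</tool_call>'

match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", raw_output, re.DOTALL)
tool_call = json.loads(match.group(1))

print(tool_call["name"])  # multiply
```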

Next, call the tool function and append the tool result:

```python
messages.append({"role": "tool", "name": "multiply", "content": "42"})
```
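
The `"42"` above comes from actually running the requested tool. A minimal sketch of that dispatch step, assuming the `multiply` tool defined earlier and a hypothetical `available_tools` lookup table:

```python
def multiply(a: int, b: int):
    """A function that multiplies two numbers"""
    return int(a) * int(b)

# Hypothetical name-to-function lookup for dispatching tool calls
available_tools = {"multiply": multiply}

# The call the model made, as parsed from its output
tool_call = {"name": "multiply", "arguments": {"a": "6", "b": "7"}}

# Look up the function, call it with the model-supplied arguments,
# and stringify the result for the "tool" message
result = available_tools[tool_call["name"]](**tool_call["arguments"])
tool_message = {"role": "tool", "name": tool_call["name"], "content": str(result)}

print(tool_message["content"])  # 42
```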

Finally, apply the chat template to the updated `messages` list and call `generate()` once again to continue the conversation.

## Prompt Format for JSON Mode / Structured Outputs

Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a JSON object, in a specific JSON schema.