update
- doc/chat-template/DeepSeek-R1-0528/chat_template.jinja +14 -0
- doc/chat-template/DeepSeek-R1-0528/deepseek.md +151 -0
- doc/chat-template/DeepSeek-R1-0528/demo.py +75 -0
- doc/chat-template/Hermes-3-Llama-3.1-405B/README.md +64 -0
- doc/chat-template/Hermes-3-Llama-3.1-405B/chat_template.default.jinja +6 -0
- doc/chat-template/Hermes-3-Llama-3.1-405B/chat_template.tool_use.jinja +152 -0
- doc/chat-template/Llama-3.1-405B-Instruct/README.md +0 -23
- doc/chat-template/Llama-3.1-405B-Instruct/{chat_template.md → chat_template.jinja} +0 -6
- doc/chat-template/Llama-3.1-405B-Instruct/demo.py +0 -20
- doc/chat-template/Llama-3.1-405B-Instruct/generate.py +0 -24
- doc/chat-template/export_chat_template.py +35 -0
- doc/chat-template/tool_demo.py +8 -5
- doc/chat-template/tools_and_llm_response.md +84 -8
doc/chat-template/DeepSeek-R1-0528/chat_template.jinja
ADDED
@@ -0,0 +1,14 @@
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='', is_first_sp=true, is_last_user=false) %}{%- for message in messages %}{%- if message['role'] == 'system' %}{%- if ns.is_first_sp %}{% set ns.system_prompt = ns.system_prompt + message['content'] %}{% set ns.is_first_sp = false %}{%- else %}{% set ns.system_prompt = ns.system_prompt + '

' + message['content'] %}{%- endif %}{%- endif %}{%- endfor %}{{ bos_token }}{{ ns.system_prompt }}{%- for message in messages %}{% set content = message['content'] %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{%- set ns.is_first = false -%}{%- set ns.is_last_user = true -%}{{'<|User|>' + content + '<|Assistant|>'}}{%- endif %}{%- if message['role'] == 'assistant' %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{% endif %}{%- if message['role'] == 'assistant' and message['tool_calls'] is defined and message['tool_calls'] is not none %}{%- set ns.is_last_user = false -%}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{%- endif %}{%- set ns.is_first = false %}{%- set ns.is_tool = false -%}{%- set ns.is_output_first = true %}{%- for tool in message['tool_calls'] %}{%- if not ns.is_first %}{%- if content is none %}{{'<|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- else %}{{content + '<|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- endif %}{%- set ns.is_first = true -%}{%- else %}{{'
' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- endif %}{%- endfor %}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- if message['role'] == 'assistant' and (message['tool_calls'] is not defined or message['tool_calls'] is none)%}{%- set ns.is_last_user = false -%}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + content + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{{content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_last_user = false -%}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + content + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'
<|tool▁output▁begin|>' + content + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_last_user and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}
doc/chat-template/DeepSeek-R1-0528/deepseek.md
ADDED
@@ -0,0 +1,151 @@
## R1-template

```sh
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='', is_first_sp=true) %}{%- for message in messages %}{%- if message['role'] == 'system' %}{%- if ns.is_first_sp %}{% set ns.system_prompt = ns.system_prompt + message['content'] %}{% set ns.is_first_sp = false %}{%- else %}{% set ns.system_prompt = ns.system_prompt + '\n\n' + message['content'] %}{%- endif %}{%- endif %}{%- endfor %}{{ bos_token }}{{ ns.system_prompt }}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and 'tool_calls' in message %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls'] %}{%- if not ns.is_first %}{%- if message['content'] is none %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{%- else %}{{'<|Assistant|>' + message['content'] + '<|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<tool▁call▁end|>'}}{%- endif %}{%- set ns.is_first = true -%}{%- else %}{{'\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}{%- endif %}{%- endfor %}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- if message['role'] == 'assistant' and 'tool_calls' not in message %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}
```

FAQ:
- Q: Why is there no eos_token after DeepSeek's system content?
- A: Because DeepSeek has no dedicated system special token; the system content is simply prepended before the first user turn: `{{ bos_token }}{{ ns.system_prompt }} {{'<|User|>' + message['content']}}`

- Q: Is using a system prompt recommended?
- A: No; https://github.com/deepseek-ai/DeepSeek-R1/blob/main/README.md recommends against using a system prompt.

- Q: If the system role is not used, where should the system content go?
- A: Per that README, all instructions should be contained within the user prompt instead.

- Q: Why does vLLM inference append an eos_token after the system content?
- A: Unknown.

- Q: Why is it end▁of▁sentence rather than end_of_sentence?
- A:

Official updates:
- Added `<|Assistant|><think>\n`, forcing the output to start with `<think>`
  - https://huggingface.co/deepseek-ai/DeepSeek-R1/commit/8a58a132790c9935686eb97f042afa8013451c9f
- Changed `"add_bos_token": false` to `true`
  - https://huggingface.co/deepseek-ai/DeepSeek-R1/commit/cb48aa8cb28c160ec8d853707278e0402c9ad01a

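A quick way to check which revision you have locally is to render a generation prompt and inspect the tail (a minimal sketch; assumes the pinned Hub revision includes the commits above):

```py
from transformers import AutoTokenizer

# Sketch: after the update, the generation prompt should force <think>.
tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "hi"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt.endswith("<|Assistant|><think>\n"))  # expected: True on the updated template
```
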
```sh
{% if not add_generation_prompt is defined %}
{% set add_generation_prompt = false %}
{% endif %}
{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='', is_first_sp=true) %}
{%- for message in messages %}
{%- if message['role'] == 'system' %}
{%- if ns.is_first_sp %}
{% set ns.system_prompt = ns.system_prompt + message['content'] %}
{% set ns.is_first_sp = false %}
{%- else %}
{% set ns.system_prompt = ns.system_prompt + '\n\n' + message['content'] %}
{%- endif %}
{%- endif %}
{%- endfor %}
{{ bos_token }}{{ ns.system_prompt }}
{%- for message in messages %}
{%- if message['role'] == 'user' %}
{%- set ns.is_tool = false -%}
{{'<|User|>' + message['content']}}
{%- endif %}
{%- if message['role'] == 'assistant' and 'tool_calls' in message %}
{%- set ns.is_tool = false -%}
{%- for tool in message['tool_calls'] %}
{%- if not ns.is_first %}
{%- if message['content'] is none %}
{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}
{%- else %}
{{'<|Assistant|>' + message['content'] + '<|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<tool▁call▁end|>'}}
{%- endif %}
{%- set ns.is_first = true -%}
{%- else %}
{{'\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<|tool▁call▁end|>'}}
{%- endif %}
{%- endfor %}
{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}
{%- endif %}
{%- if message['role'] == 'assistant' and 'tool_calls' not in message %}
{%- if ns.is_tool %}
{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}
{%- set ns.is_tool = false -%}
{%- else %}
{% set content = message['content'] %}
{% if '</think>' in content %}
{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}
{%- endif %}
{%- endif %}
{%- if message['role'] == 'tool' %}
{%- set ns.is_tool = true -%}
{%- if ns.is_output_first %}
{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}
{%- set ns.is_output_first = false %}
{%- else %}
{{'<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}
{%- endif %}
{%- endif %}
{%- endfor -%}
{% if ns.is_tool %}
{{'<|tool▁outputs▁end|>'}}
{% endif %}
{% if add_generation_prompt and not ns.is_tool %}
{{'<|Assistant|>'}}
{% endif %}
```

## DeepSeek-R1-Distill-Qwen-1.5B

```sh
system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n
```

## R1

```sh
system_prompt='', is_first_sp=true) %}{%- for message in messages %}{%- if message['role'] == 'system' %}{%- if ns.is_first_sp %}{% set ns.system_prompt = ns.system_prompt + message['content'] %}{% set ns.is_first_sp = false %}{%- else %}{% set ns.system_prompt = ns.system_prompt + '\\n\\n' + message['content'] %}{%- endif %}{%- endif %}{%- endfor %}{{ bos_token }}{{ ns.system_prompt }}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and 'tool_calls' in message %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls'] %}{%- if not ns.is_first %}{%- if message['content'] is none %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- else %}{{'<|Assistant|>' + message['content'] + '<|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- endif %}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- endif %}{%- endfor %}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- if message['role'] == 'assistant' and 'tool_calls' not in message %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'
```

## V3-template (no think)

```sh
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='', is_first_sp=true) %}{%- for message in messages %}{%- if message['role'] == 'system' %}{%- if ns.is_first_sp %}{% set ns.system_prompt = ns.system_prompt + message['content'] %}{% set ns.is_first_sp = false %}{%- else %}{% set ns.system_prompt = ns.system_prompt + '

' + message['content'] %}{%- endif %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'
' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '
' + '```json' + '
' + tool['function']['arguments'] + '
' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{{'<|Assistant|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'
<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}
```
doc/chat-template/DeepSeek-R1-0528/demo.py
ADDED
@@ -0,0 +1,75 @@
"""
<|begin▁of▁sentence|><|User|>Hello, how are you?<|Assistant|>I'm doing great. How can I help you today?<|end▁of▁sentence|><|User|>I'd like to show off how chat templating works!
"""

from transformers import AutoTokenizer

# tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3")
# tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-0528")

chat = [
    {"role": "system", "content": "you are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
    {"role": "tool", "content": "789"},
    {"role": "assistant", "content": "123"},
    {"role": "user", "content": "456"},
    {"role": "tool", "content": "awq"},
]

# chat = [
#     {"role": "user", "content": "Hello, how are you?"},
#     {"role": "assistant", "content": "<think>i am thinking</think>I'm doing great. How can I help you today?"},
#     {"role": "user", "content": "I'd like to show off how chat templating works!"},
# ]


def get_weather(location: str, unit: str):
    """
    Get the current weather in a given location.

    Args:
        location: The city and state, e.g., 'San Francisco, CA'.
        unit: The unit of temperature, either 'celsius' or 'fahrenheit'.

    Returns:
        str: The current weather in the given location.
    """
    return f"Getting the weather for {location} in {unit}..."


prompt = tokenizer.apply_chat_template(chat, tools=[get_weather], tokenize=False)
print(prompt)

prompt_ids = tokenizer.apply_chat_template(chat)
print(prompt_ids)

"""
<|begin▁of▁sentence|>you are a helpful assistant.<|User|>Hello, how are you?<|Assistant|>I'm doing great. How can I help you today?<|end▁of▁sentence|><|User|>I'd like to show off how chat templating works!<|Assistant|>123<|end▁of▁sentence|><|User|>456

[0, 12829, 477, 260, 11502, 22896, 16, 128803, 19923, 14, 1192, 477, 440, 33, 128804, 43, 4571, 4843, 2405, 16, 1730, 588, 342, 1694, 440, 4316, 33, 1, 128803, 43, 7485, 1277, 304, 1801, 1375, 1192, 20297, 12202, 1217, 2984, 3, 128804, 6895, 1, 128803, 18009]

0: <|begin▁of▁sentence|>
1: <|end▁of▁sentence|>
128803: <|User|>
128804: <|Assistant|>
"""

text = tokenizer.decode([0, 15061, 14928, 35895, 23379, 303, 2788, 35895, 14928, 4, 844, 60949, 4, 24415, 27318, 478, 7625, 34092, 7524, 14928])
print(text)
doc/chat-template/Hermes-3-Llama-3.1-405B/README.md
ADDED
@@ -0,0 +1,64 @@
## tool_call

Supports multiple tools.



## llama3.1-405b

```yml
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024

You are a bot that responds to weather queries.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hey, what's the temperature in Paris right now?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{"name": "get_current_temperature", "parameters": {"location": "Paris, France"}}<|eot_id|><|start_header_id|>ipython<|end_header_id|>

"22.0"<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```


## hermes example

```sh
<|im_start|>system
You are a helpful assistant. The date today is 12/26/24.<|im_end|>

<|im_start|>user
who won the last womens singles wimbledon<|im_end|>

<|im_start|>assistant
<tool_call>
{"name": "tavily_search_results_json", "arguments": {"query": "last women's singles wimbledon winner"}}
</tool_call><|im_end|>

<|im_start|>tool
<tool_response>
[{"url": "https://en.wikipedia.org/wiki/List_of_Wimbledon_ladies'_singles_champions", "content": "***"}, {"url": "https://www.tennis-x.com/winners/womens/wimbledon.php", "content": "***"},...]
</tool_response><|im_end|>

<|im_start|>assistant
```

This format is nice:
1. It simplifies llama's special tokens:
   - llama: `<|start_header_id|>user<|end_header_id|>`
   - hermes: `<|im_start|>user`
2. It prints more readably:
   - llama: ss
   - hermes: ss
3. tools:
   - llama: tool turns uniformly use the `ipython` identifier; there is no `tool_response` wrapper



## TODO:

- Remove the date
-
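A rendering sketch for the example above (the message list is illustrative; when `tools=` is also passed, transformers selects the tokenizer's "tool_use" template variant):

```py
from transformers import AutoTokenizer

# Sketch: reproduce the hermes-style rendering shown above.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-3-Llama-3.1-405B")
messages = [
    {"role": "system", "content": "You are a helpful assistant. The date today is 12/26/24."},
    {"role": "user", "content": "who won the last womens singles wimbledon"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```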
doc/chat-template/Hermes-3-Llama-3.1-405B/chat_template.default.jinja
ADDED
@@ -0,0 +1,6 @@
{{bos_token}}{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system
You are a helpful assistant.<|im_end|>
' }}{% endif %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
doc/chat-template/Hermes-3-Llama-3.1-405B/chat_template.tool_use.jinja
ADDED
@@ -0,0 +1,152 @@
{%- macro json_to_python_type(json_spec) %}
{%- set basic_type_map = {
"string": "str",
"number": "float",
"integer": "int",
"boolean": "bool"
} %}

{%- if basic_type_map[json_spec.type] is defined %}
{{- basic_type_map[json_spec.type] }}
{%- elif json_spec.type == "array" %}
{{- "list[" + json_to_python_type(json_spec|items) + "]"}}
{%- elif json_spec.type == "object" %}
{%- if json_spec.additionalProperties is defined %}
{{- "dict[str, " + json_to_python_type(json_spec.additionalProperties) + ']'}}
{%- else %}
{{- "dict" }}
{%- endif %}
{%- elif json_spec.type is iterable %}
{{- "Union[" }}
{%- for t in json_spec.type %}
{{- json_to_python_type({"type": t}) }}
{%- if not loop.last %}
{{- "," }}
{%- endif %}
{%- endfor %}
{{- "]" }}
{%- else %}
{{- "Any" }}
{%- endif %}
{%- endmacro %}


{{- bos_token }}
{{- '<|im_start|>system
' }}
{{- "You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> " }}
{%- for tool in tools %}
{%- if tool.function is defined %}
{%- set tool = tool.function %}
{%- endif %}
{{- '{"type": "function", "function": ' }}
{{- '{"name": "' + tool.name + '", ' }}
{{- '"description": "' + tool.name + '(' }}
{%- for param_name, param_fields in tool.parameters.properties|items %}
{{- param_name + ": " + json_to_python_type(param_fields) }}
{%- if not loop.last %}
{{- ", " }}
{%- endif %}
{%- endfor %}
{{- ")" }}
{%- if tool.return is defined %}
{{- " -> " + json_to_python_type(tool.return) }}
{%- endif %}
{{- " - " + tool.description + "

" }}
{%- for param_name, param_fields in tool.parameters.properties|items %}
{%- if loop.first %}
{{- " Args:
" }}
{%- endif %}
{{- " " + param_name + "(" + json_to_python_type(param_fields) + "): " + param_fields.description|trim }}
{%- endfor %}
{%- if tool.return is defined and tool.return.description is defined %}
{{- "
Returns:
" + tool.return.description }}
{%- endif %}
{{- '"' }}
{{- ', "parameters": ' }}
{%- if tool.parameters.properties | length == 0 %}
{{- "{}" }}
{%- else %}
{{- tool.parameters|tojson }}
{%- endif %}
{{- "}" }}
{%- if not loop.last %}
{{- "
" }}
{%- endif %}
{%- endfor %}
{{- " </tools>" }}
{{- 'Use the following pydantic model json schema for each tool call you will make: {"properties": {"name": {"title": "Name", "type": "string"}, "arguments": {"title": "Arguments", "type": "object"}}, "required": ["name", "arguments"], "title": "FunctionCall", "type": "object"}}
' }}
{{- "For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
" }}
{{- "<tool_call>
" }}
{{- '{"name": <function-name>, "arguments": <args-dict>}
' }}
{{- '</tool_call><|im_end|>
' }}
{%- for message in messages %}
{%- if message.role == "user" or message.role == "system" or (message.role == "assistant" and message.tool_calls is not defined) %}
{{- '<|im_start|>' + message.role + '
' + message.content + '<|im_end|>' + '
' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role }}
{%- for tool_call in message.tool_calls %}
{{- '
<tool_call>
' }} {%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '{' }}
{{- '"name": "' }}
{{- tool_call.name }}
{{- '"' }}
{{- ', '}}
{%- if tool_call.arguments is defined %}
{{- '"arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments|tojson }}
{%- endif %}
{%- endif %}
{{- '}' }}
{{- '
</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>
' }}
{%- elif message.role == "tool" %}
{%- if loop.previtem and loop.previtem.role != "tool" %}
{{- '<|im_start|>tool
' }}
{%- endif %}
{{- '<tool_response>
' }}
{{- message.content }}
{%- if not loop.last %}
{{- '
</tool_response>
' }}
{%- else %}
{{- '
</tool_response>' }}
{%- endif %}
{%- if not loop.last and loop.nextitem.role != "tool" %}
{{- '<|im_end|>' }}
{%- elif loop.last %}
{{- '<|im_end|>' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant
' }}
{%- endif %}
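The `json_to_python_type` macro at the top of this template maps JSON-schema types to Python-style annotations for the tool signatures; a rough Python equivalent of that logic, as a sketch rather than part of the template:

```py
def json_to_python_type(spec: dict) -> str:
    """Rough Python equivalent of the template's json_to_python_type macro (sketch)."""
    basic_type_map = {"string": "str", "number": "float", "integer": "int", "boolean": "bool"}
    t = spec.get("type")
    if isinstance(t, list):  # a list of types renders as a Union
        return "Union[" + ",".join(json_to_python_type({"type": x}) for x in t) + "]"
    if t in basic_type_map:
        return basic_type_map[t]
    if t == "array":
        return "list[" + json_to_python_type(spec.get("items", {})) + "]"
    if t == "object":
        if "additionalProperties" in spec:
            return "dict[str, " + json_to_python_type(spec["additionalProperties"]) + "]"
        return "dict"
    return "Any"

print(json_to_python_type({"type": "array", "items": {"type": "string"}}))  # list[str]
```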
doc/chat-template/Llama-3.1-405B-Instruct/README.md
CHANGED
@@ -1,23 +0,0 @@
## tool example

```yml
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024

You are a bot that responds to weather queries.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hey, what's the temperature in Paris right now?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{"name": "get_current_temperature", "parameters": {"location": "Paris, France"}}<|eot_id|><|start_header_id|>ipython<|end_header_id|>

"22.0"<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Shortcomings: see hermes/README.md
doc/chat-template/Llama-3.1-405B-Instruct/{chat_template.md → chat_template.jinja}
RENAMED
@@ -1,8 +1,3 @@
-
-
-## chat_template
-
-```php
 {{- bos_token }}
 {%- if custom_tools is defined %}
 {%- set tools = custom_tools %}
@@ -112,4 +107,3 @@
 {%- if add_generation_prompt %}
 {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
 {%- endif %}
-```
doc/chat-template/Llama-3.1-405B-Instruct/demo.py
DELETED
@@ -1,20 +0,0 @@
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-405B", use_fast=False)

messages = [
    {"role": "user", "content": "你好"},
    {"role": "assistant", "content": "good"},
]

tokenizer.apply_chat_template(messages)

# print(token_id, decoding)
doc/chat-template/Llama-3.1-405B-Instruct/generate.py
DELETED
@@ -1,24 +0,0 @@
import transformers
import torch

model_id = "/workspace/czy/model_weights/Meta-Llama-3.1-8B-Instruct/"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
doc/chat-template/export_chat_template.py
ADDED
@@ -0,0 +1,35 @@
import os
from transformers import AutoTokenizer

# MODEL_PATH = "meta-llama/Llama-3.1-405B-Instruct"
# MODEL_PATH = "NousResearch/Hermes-3-Llama-3.1-405B"  # tool_calls in messages not supported
# MODEL_PATH = "../../test/Llama-4-Maverick-17B-128E-Instruct/"
# MODEL_PATH = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"
# MODEL_PATH = "Qwen/Qwen3-235B-A22B-Instruct-2507"
# MODEL_PATH = "mistralai/Mistral-7B-Instruct-v0.1"  # no tool_calls in messages, no role=tool, no tools
# MODEL_PATH = "mistralai/Ministral-8B-Instruct-2410"  # supports tools and tool_calls (id required); non-mainstream format
MODEL_PATH = "deepseek-ai/DeepSeek-R1"
# MODEL_PATH = "deepseek-ai/DeepSeek-R1-0528"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
chat_template = tokenizer.chat_template

# Export the template(s) next to this script, one .jinja file per template.
output_dir = MODEL_PATH.split("/")[-1]
os.makedirs(output_dir, exist_ok=True)
if isinstance(chat_template, dict):
    # Some tokenizers (e.g. Hermes) ship multiple named templates.
    for k, v in chat_template.items():
        with open(f"{output_dir}/chat_template.{k}.jinja", "w") as f_out:
            f_out.write(v)
else:
    # chat_template = chat_template.replace("\\n", "\n")
    with open(f"{output_dir}/chat_template.jinja", "w") as f_out:
        f_out.write(chat_template)
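One way to sanity-check an exported file is to render it directly with jinja2, outside transformers (a minimal sketch; the path and bos_token value below are assumptions for illustration):

```py
import jinja2

# Minimal sketch: render an exported chat template with plain jinja2.
with open("DeepSeek-R1/chat_template.jinja") as f:
    template = jinja2.Environment().from_string(f.read())

print(template.render(
    messages=[{"role": "user", "content": "hi"}],
    bos_token="<|begin▁of▁sentence|>",
    add_generation_prompt=True,
))
```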
doc/chat-template/tool_demo.py
CHANGED
@@ -16,12 +16,13 @@ from transformers import AutoTokenizer
 from transformers.utils import get_json_schema


-MODEL_PATH = "meta-llama/Llama-3.1-405B-Instruct"
-MODEL_PATH = "NousResearch/Hermes-3-Llama-3.1-405B"  # tool_calls in messages not supported
-MODEL_PATH = "
+# MODEL_PATH = "meta-llama/Llama-3.1-405B-Instruct"
+# MODEL_PATH = "NousResearch/Hermes-3-Llama-3.1-405B"  # tool_calls in messages not supported
+# MODEL_PATH = "../../test/Llama-4-Maverick-17B-128E-Instruct/"
+# MODEL_PATH = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"
 MODEL_PATH = "Qwen/Qwen3-235B-A22B-Instruct-2507"
 # MODEL_PATH = "mistralai/Mistral-7B-Instruct-v0.1"  # no tool_calls in messages, no role=tool, no tools
-MODEL_PATH = "mistralai/Ministral-8B-Instruct-2410"  # supports tools and tool_calls (id required); non-mainstream format
+# MODEL_PATH = "mistralai/Ministral-8B-Instruct-2410"  # supports tools and tool_calls (id required); non-mainstream format
 tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)

 # First, define a tool
@@ -39,7 +40,9 @@ def get_current_temperature(location: str) -> float:
 # Next, create a chat and apply the chat template
 messages = [
     {"role": "system", "content": "You are a bot that responds to weather queries."},
-    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
+    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"},
+    {"role": "assistant", "content": "test1"},
+    {"role": "user", "content": "test2"},
 ]

 # step1:
doc/chat-template/tools_and_llm_response.md
CHANGED
@@ -24,8 +24,8 @@ Respond in the format [func_name1(params_name1=params_value1, params_name2=param
 
 ## LLM inputs: messages and tools
 
-```
-[
+```py
+messages = [
 {
 "role": "system",
 "content": "You are a bot that responds to weather queries."
@@ -35,6 +35,7 @@ Respond in the format [func_name1(params_name1=params_value1, params_name2=param
 "content": "Hey, what's the temperature in Paris right now?"
 }
 ]
+# chat_completion = client.chat.completions.create(messages=messages, model=model, tools=tools)  # the LLM can then be called like this
 ```
 
 `json_schema` of tools
@@ -110,9 +111,13 @@ Respond in the format {"name": function name, "parameters": dictionary of argume
 Hey, what's the temperature in Paris right now?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 ```
 
-- **Inputs**:
-  -
-
+- **Inputs**:
+  - **tools format**: the supported tool list (`tools`) is json-schema based, since the chat_template uses [`tojson(indent=4)`](https://github.com/vllm-project/vllm/blob/v0.10.1/examples/tool_chat_template_llama3.1_json.jinja#L48)
+  - **tools position in the prompt**: prepended to the content of the first user turn (the system turn and all other turns are unchanged)
+
+- **Outputs**:
+  - **output format**: the returned `response` must be JSON: `respond with a JSON ... Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}`
+  - **parsing tool calls from the output**: ?
 
 
 
@@ -140,8 +145,9 @@ Hey, what's the temperature in Paris right now?<|im_end|>
 ```
 
 
-- **Inputs**:
-  -
+- **Inputs**:
+  - **tools format**: the supported tool list (`tools`) uses a custom format; see the [chat_template](https://github.com/vllm-project/vllm/blob/v0.10.1/examples/tool_chat_template_hermes.jinja#L41)
+  - **tools position in the prompt**: an extra `system` turn is inserted at the very front (the user-provided `system` turn is unchanged)
 - **Outputs**: the returned `response` must be JSON wrapped in `<tool_call>` tags
 `return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call>{"name": <function-name>, "arguments": <args-dict>}</tool_call>`
 
@@ -149,6 +155,47 @@ Hey, what's the temperature in Paris right now?<|im_end|>
 ## llama3.2
 
 
+## llama4
+
+```sh
+<|begin_of_text|><|header_start|>system<|header_end|>
+
+Environment: ipython
+You are a bot that responds to weather queries.<|eot|><|header_start|>user<|header_end|>
+
+Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.
+
+Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.Do not use variables.
+
+{
+    "type": "function",
+    "function": {
+        "name": "get_current_temperature",
+        "description": "Get the current temperature at a location.",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "location": {
+                    "type": "string",
+                    "description": "The location to get the temperature for, in the format \"City, Country\""
+                }
+            },
+            "required": [
+                "location"
+            ]
+        },
+        "return": {
+            "type": "number",
+            "description": "The current temperature at the specified location in the specified units, as a float."
+        }
+    }
+}
+
+Hey, what's the temperature in Paris right now?<|eot|><|header_start|>assistant<|header_end|>
+```
+
+Much like llama3.1, only without the `date` and with different `special_token`s. (Likewise concatenated into the first user turn.)
+
 
 ## mistralai/Ministral-8B-Instruct-2410
 
@@ -164,4 +211,33 @@ Hey, what's the temperature in Paris right now?[/INST]
 
 
 
-## 
+## qwen3
+
+```sh
+<|im_start|>system
+You are a bot that responds to weather queries.
+
+# Tools
+
+You may call one or more functions to assist with the user query.
+
+You are provided with function signatures within <tools></tools> XML tags:
+<tools>
+{"type": "function", "function": {"name": "get_current_temperature", "description": "Get the current temperature at a location.", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The location to get the temperature for, in the format \"City, Country\""}}, "required": ["location"]}, "return": {"type": "number", "description": "The current temperature at the specified location in the specified units, as a float."}}}
+</tools>
+
+For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
+<tool_call>
+{"name": <function-name>, "arguments": <args-json-object>}
+</tool_call><|im_end|>
+<|im_start|>user
+Hey, what's the temperature in Paris right now?<|im_end|>
+<|im_start|>assistant
+```
+
+
+- **Inputs**:
+  - **tools format**: the supported tool list (`tools`) is json-schema based.
+  - **tools position in the prompt**: appended to the end of the original `system` content.
+- **Outputs**: the returned `response` must be JSON wrapped in `<tool_call>` tags
+`return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": <function-name>, "arguments": <args-json-object>}\n</tool_call>`
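
Both the hermes-style and qwen3 formats wrap each call in `<tool_call>` tags, so extraction can share one parser; a minimal sketch (hypothetical helper, assuming each tag body is well-formed JSON):

```py
import json
import re

# Minimal sketch: pull hermes/qwen3-style tool calls out of a model response.
def parse_tool_calls(response: str) -> list[dict]:
    pattern = r"<tool_call>\s*(\{.*?\})\s*</tool_call>"
    return [json.loads(m) for m in re.findall(pattern, response, re.DOTALL)]

demo = '<tool_call>\n{"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}\n</tool_call>'
print(parse_tool_calls(demo))  # [{'name': 'get_current_temperature', 'arguments': {'location': 'Paris, France'}}]
```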