Parkerlambert123 committed on
Commit fa970f2 · verified · 1 Parent(s): e62d95c

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ images/general_score.png filter=lfs diff=lfs merge=lfs -text
+ images/writingbench_score.png filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,202 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - Congliu/Chinese-DeepSeek-R1-Distill-data-110k
+ - cognitivecomputations/dolphin-r1
+ - open-thoughts/OpenThoughts-114k
+ - qihoo360/Light-R1-SFTData
+ - qihoo360/Light-R1-DPOData
+ language:
+ - zh
+ - en
+ base_model:
+ - Zhihu-ai/Zhi-writing-dsr1-14b
+ tags:
+ - qwen2
+ ---
+ # Zhi-writing-dsr1-14b
+
+ ## 1. Introduction
+
+ Zhi-writing-dsr1-14b is a fine-tuned model based on DeepSeek-R1-Distill-Qwen-14B, specifically optimized for enhanced creative writing capabilities. Several benchmark evaluations indicate the model's improved creative writing performance.
+
+ In the [LLM Creative Story-Writing Benchmark](https://github.com/lechmazur/writing), the model achieved a score of **8.33** compared to its base model's **7.8**. In the [WritingBench](https://github.com/X-PLUG/WritingBench) evaluation framework, it scored **8.46**, showing improvement over DeepSeek-R1-Distill-Qwen-14B's **7.93**. The model was also evaluated using GPT-4o on the AlpacaEval dataset, achieving an **82.6%** win rate when compared with the base model.
+
+ The figure below shows the performance comparison across different domains in WritingBench:
+
+ ![writingbench](./images/writingbench_score.png)
+
+ <figcaption style="text-align:center; font-size:0.9em; color:#666">
+ Figure 1: WritingBench performance of Zhi-writing-dsr1-14b and DeepSeek-R1-Distill-Qwen-14B across 6 domains and 3 writing requirements evaluated with WritingBench critic model (scale: 1-10). The six domains include: (D1) Academic & Engineering, (D2) Finance & Business, (D3) Politics & Law, (D4) Literature & Art, (D5) Education, and (D6) Advertising & Marketing. The three writing requirements assessed are: (R1) Style, (R2) Format, and (R3) Length. Here, "C" indicates category-specific scores.
+ </figcaption>
+
+ ## 2. Training Process
+
+ ### Data
+
+ The model's training corpus comprises three primary data sources: rigorously filtered open-source datasets, chain-of-thought reasoning corpora, and curated question-answer pairs from Zhihu.
+
+ To achieve optimal domain coverage, we meticulously balanced the distribution of various datasets, including [Dolphin-r1](https://huggingface.co/datasets/cognitivecomputations/dolphin-r1), [Congliu/Chinese-DeepSeek-R1-Distill-data-110k](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k), [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k), [Light-R1-SFTData](https://huggingface.co/datasets/qihoo360/Light-R1-SFTData), and [Light-R1-DPOData](https://huggingface.co/datasets/qihoo360/Light-R1-DPOData), alongside high-quality content from Zhihu. All datasets underwent comprehensive quality assurance through our Reward Model (RM) filtering pipeline.
+
+ ### Training
+
+ **Supervised Fine-tuning (SFT)**: We employed a curriculum learning strategy for supervised fine-tuning. This approach progressively strengthens creative writing capabilities while incorporating diverse domain data to maintain core competencies and mitigate catastrophic forgetting.
+
+ **Direct Preference Optimization (DPO)**: For preference pairs with minimal edit distances, we utilized Step-DPO ([arXiv:2406.18629](https://arxiv.org/abs/2406.18629)) to selectively penalize incorrect tokens, while incorporating the positive constraint proposed in DPOP ([arXiv:2402.13228](https://arxiv.org/abs/2402.13228)) into the loss function.
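+
+ As a rough illustration (not the exact training code used for this model), the DPOP-style objective combines the standard DPO preference term with a hinge penalty that keeps the policy's log-likelihood of the preferred response from dropping below the reference model's. The hyperparameter values below are placeholders:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def dpop_loss(policy_chosen_logps, policy_rejected_logps,
+               ref_chosen_logps, ref_rejected_logps,
+               beta=0.1, lambda_pos=50.0):
+     """Sketch of DPO with the DPOP positive constraint (arXiv:2402.13228).
+
+     All arguments are summed log-probabilities of complete responses, shape (batch,).
+     beta and lambda_pos are illustrative values, not the ones used for this model.
+     """
+     chosen_logratio = policy_chosen_logps - ref_chosen_logps
+     rejected_logratio = policy_rejected_logps - ref_rejected_logps
+     # The penalty activates only when the policy assigns less probability to the
+     # preferred response than the reference model does.
+     positive_penalty = torch.clamp(ref_chosen_logps - policy_chosen_logps, min=0)
+     logits = beta * (chosen_logratio - rejected_logratio - lambda_pos * positive_penalty)
+     return -F.logsigmoid(logits).mean()
+ ```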
+
+ ## 3. Evaluation Results
+
+ Our evaluation results suggest promising improvements in the model's creative writing capabilities. In the LLM Creative Story-Writing Benchmark evaluation, the model achieved a score of **8.33**, showing an improvement from the base model's **7.87**. When assessed on WritingBench, a comprehensive framework for evaluating large language model writing abilities, the model attained a score of **8.46**. This places it in proximity to DeepSeek-R1's performance and represents an advancement over DeepSeek-R1-Distill-Qwen-14B's score of 7.93.
+
+ With respect to general capabilities, evaluations indicate modest improvements of **2%–5% in knowledge and reasoning tasks (CMMLU, MMLU-Pro)**, alongside encouraging progress in mathematical reasoning as measured by benchmarks such as **AIME-2024, AIME-2025, and GSM8K**. The results suggest that the model maintains a balanced performance profile, with improvements observed across creative writing, knowledge/reasoning, and mathematical tasks compared to DeepSeek-R1-Distill-Qwen-14B. These characteristics potentially make it suitable for a range of general-purpose applications.
+
+ ![general](./images/general_score.png)
+
+ ## 4. How to Run Locally
+
+ Zhi-writing-dsr1-14b can be deployed on a variety of hardware configurations, such as a single 80 GB GPU (H20/A800/H800) or dual RTX 4090s. The INT4 quantized version, Zhi-writing-dsr1-14b-gptq-int4, can additionally be deployed on a single RTX 4090.
+
+ ### Transformers
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from transformers.generation import GenerationConfig
+
+ MODEL_NAME = "Zhihu-ai/Zhi-writing-dsr1-14b"
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
+
+ # use bf16
+ # model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto", trust_remote_code=True, torch_dtype=torch.bfloat16).eval()
+ # use fp16
+ # model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto", trust_remote_code=True, torch_dtype=torch.float16).eval()
+ # use cpu only
+ # model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="cpu", trust_remote_code=True).eval()
+ # use auto mode: automatically select the precision based on the device
+ model = AutoModelForCausalLM.from_pretrained(
+     MODEL_NAME,
+     device_map="auto",
+     torch_dtype="auto",
+     trust_remote_code=True
+ ).eval()
+
+ # Optionally load generation hyperparameters from the repo
+ # (not needed on transformers >= 4.32.0).
+ # model.generation_config = GenerationConfig.from_pretrained(MODEL_NAME, trust_remote_code=True)
+
+ generate_configs = {
+     "temperature": 0.6,
+     "do_sample": True,
+     "top_p": 0.95,
+     "max_new_tokens": 4096
+ }
+
+ prompt = "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章"
+ messages = [
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     **generate_configs
+ )
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
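+
+ Like the DeepSeek-R1 series, the model first emits a reasoning block delimited by `<think> ... </think>` and then the final answer. If you only want the answer, you can drop everything up to the closing tag; a small helper (hypothetical, not part of this repo) is enough:
+
+ ```python
+ def strip_reasoning(text: str) -> str:
+     """Return only the content after the closing </think> tag, if present."""
+     return text.split("</think>", 1)[-1].strip()
+
+ print(strip_reasoning(response))
+ ```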
+
+ ### vLLM
+ You can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
+
+ ```bash
+ # install vllm
+ pip install "vllm>=0.6.4.post1"
+
+ # huggingface model id
+ vllm serve Zhihu-ai/Zhi-writing-dsr1-14b --served-model-name Zhi-writing-dsr1-14b --port 8000
+
+ # local path
+ vllm serve /path/to/model --served-model-name Zhi-writing-dsr1-14b --port 8000
+
+ curl http://localhost:8000/v1/completions \
+     -H "Content-Type: application/json" \
+     -d '{
+         "model": "Zhi-writing-dsr1-14b",
+         "prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章",
+         "max_tokens": 4096,
+         "temperature": 0.6,
+         "top_p": 0.95
+     }'
+ ```
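+
+ The `/v1/completions` request above sends a raw prompt without the chat template. The OpenAI-compatible server also exposes `/v1/chat/completions`, which applies the model's chat template (including the `<think>\n` generation prefix) on the server side. A minimal sketch using the `requests` library, assuming the server started above is listening on localhost:8000:
+
+ ```python
+ import requests
+
+ resp = requests.post(
+     "http://localhost:8000/v1/chat/completions",
+     json={
+         "model": "Zhi-writing-dsr1-14b",
+         "messages": [{"role": "user", "content": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章"}],
+         "max_tokens": 4096,
+         "temperature": 0.6,
+         "top_p": 0.95,
+     },
+     timeout=600,
+ )
+ print(resp.json()["choices"][0]["message"]["content"])
+ ```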
138
+
139
+
140
+ ### SGLang
141
+
142
+ You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
143
+ ```python
144
+ # install SGLang
145
+ pip install "sglang[all]>=0.4.5" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
146
+
147
+ # huggingface model id
148
+ python -m sglang.launch_server --model-path Zhi-writing-dsr1-14b --served-model-name Zhi-writing-dsr1-14b --port 8000
149
+
150
+ # local path
151
+ python -m sglang.launch_server --model-path /path/to/model --served-model-name Zhi-writing-dsr1-14b --port 8000
152
+
153
+ # send request
154
+ curl http://localhost:8000/v1/completions \
155
+ -H "Content-Type: application/json" \
156
+ -d '{
157
+ "model": "Zhi-writing-dsr1-14b",
158
+ "prompt": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章",
159
+ "max_tokens": 4096,
160
+ "temperature": 0.6,
161
+ "top_p": 0.95
162
+ }'
163
+ ```
164
+
165
+ ### ollama
166
+
167
+ You can download ollama using [this](https://ollama.com/download/)
168
+ * quantization: Q4_K_M
169
+ ```bash
170
+ ollama run zhihu/zhi-writing-dsr1-14b
171
+ ```
172
+ * bf16
173
+ ```bash
174
+ ollama run zhihu/zhi-writing-dsr1-14b:bf16
175
+ ```
+
+ ## 5. Usage Recommendations
+
+ We recommend adhering to the following configurations when using Zhi-writing-dsr1-14b, including during benchmarking, to achieve the expected performance:
+
+ * Set the temperature within the range of 0.5–0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
+
+ * When evaluating model performance, conduct multiple tests and average the results (we use `n=16` for mathematical tasks and `n=2` for others).
+
+ * To ensure that the model engages in thorough reasoning like the DeepSeek-R1 series models, force the model to start every response with "\<think\>\n", as in the sketch below.
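+
+ With the chat template shipped in this repository, `apply_chat_template(..., add_generation_prompt=True)` already appends `<think>\n` for you. When building prompts by hand (for example for the raw `/v1/completions` endpoints above), you can add it yourself; a minimal sketch:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("Zhihu-ai/Zhi-writing-dsr1-14b")
+ messages = [{"role": "user", "content": "请你以鲁迅的口吻,写一篇介绍西湖醋鱼的文章"}]
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+ # Make sure generation starts inside the reasoning block.
+ if not prompt.endswith("<think>\n"):
+     prompt += "<think>\n"
+ ```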
+
+ ## 6. Citation
+
+ ```text
+ @misc{Zhi-writing-dsr1-14b,
+   title={Zhi-writing-dsr1-14b: Curriculum Reinforcement and Direct Preference Optimization for Robust Creative Writing in LLMs},
+   author={Jiewu Wang and Xu Chen and Wenyuan Su and Chao Huang and Hongkui Gao and Lin Feng and Shan Wang and Lu Xu and Penghe Liu and Zebin Ou},
+   year={2025},
+   eprint={},
+   archivePrefix={},
+   url={https://huggingface.co/Zhihu-ai/Zhi-writing-dsr1-14b},
+ }
+ ```
+
+ ## 7. Contact
+
+ If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
config.json ADDED
@@ -0,0 +1,47 @@
+ {
+   "architectures": [
+     "Qwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "hidden_act": "silu",
+   "hidden_size": 5120,
+   "initializer_range": 0.02,
+   "intermediate_size": 13824,
+   "max_position_embeddings": 131072,
+   "max_window_layers": 48,
+   "model_type": "qwen2",
+   "num_attention_heads": 40,
+   "num_hidden_layers": 48,
+   "num_key_value_heads": 8,
+   "pad_token_id": 151643,
+   "quantization_config": {
+     "bits": 4,
+     "block_name_to_quantize": "model.layers",
+     "checkpoint_format": "gptq",
+     "damp_percent": 0.1,
+     "desc_act": false,
+     "group_size": 128,
+     "meta": {
+       "quantizer": [
+         "optimum:1.24.0",
+         "auto_gptq:0.7.1"
+       ]
+     },
+     "modules_in_block_to_quantize": null,
+     "quant_method": "gptq",
+     "sym": true,
+     "true_sequential": true
+   },
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.50.0",
+   "use_cache": false,
+   "use_sliding_window": false,
+   "vocab_size": 152064
+ }
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "bos_token_id": 151646,
+   "eos_token_id": 151643,
+   "pad_token_id": 151643,
+   "max_new_tokens": 4096,
+   "temperature": 0.6,
+   "top_p": 0.95,
+   "transformers_version": "4.49.0"
+ }
images/general_score.png ADDED

Git LFS Details

  • SHA256: 6425a8f2cc1efa5884a831117bcdb26bacce80bdc6e6b65a3a2da45aac77c8be
  • Pointer size: 131 Bytes
  • Size of remote file: 238 kB
images/writingbench_score.png ADDED

Git LFS Details

  • SHA256: b01adf23c1286556f77c88ec2c8f6e510ac70812951024023f8587d3d82160f3
  • Pointer size: 131 Bytes
  • Size of remote file: 148 kB
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7fb3e7047fef96067d38638f351c9ac739fcc7304b544cd30298b2a43e94284f
+ size 4997121424
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64d3049455d0737f8337e2921a0e2038c4df51688986fbc2de14f2eb4218552f
+ size 4991638208
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
quantize_config.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "bits": 4,
+   "group_size": 128,
+   "damp_percent": 0.1,
+   "desc_act": false,
+   "sym": true,
+   "true_sequential": true,
+   "quant_method": "gptq",
+   "modules_in_block_to_quantize": null,
+   "checkpoint_format": "gptq",
+   "meta": {
+     "quantizer": [
+       "optimum:1.24.0",
+       "auto_gptq:0.7.1"
+     ]
+   },
+   "block_name_to_quantize": "model.layers"
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "bos_token": {
+     "content": "<|begin▁of▁sentence|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|end▁of▁sentence|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|end▁of▁sentence|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e20ddafc659ba90242154b55275402edeca0715e5dbb30f56815a4ce081f4893
+ size 11422778
tokenizer_config.json ADDED
@@ -0,0 +1,199 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "add_prefix_space": null,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|end▁of▁sentence|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|User|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151645": {
+       "content": "<|Assistant|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151646": {
+       "content": "<|begin▁of▁sentence|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|EOT|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151648": {
+       "content": "<think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151649": {
+       "content": "</think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "bos_token": "<|begin▁of▁sentence|>",
+   "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|><think>\\n'}}{% endif %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|end▁of▁sentence|>",
+   "extra_special_tokens": {},
+   "legacy": true,
+   "max_length": 8096,
+   "model_max_length": 16384,
+   "pad_token": "<|end▁of▁sentence|>",
+   "sp_model_kwargs": {},
+   "stride": 0,
+   "tokenizer_class": "LlamaTokenizerFast",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": null,
+   "use_default_system_prompt": false
+ }