Upload folder using huggingface_hub
- .gitattributes +1 -0
- README.md +87 -0
- added_tokens.json +28 -0
- chat_template.jinja +87 -0
- config.json +99 -0
- generation_config.json +12 -0
- merges.txt +0 -0
- model.safetensors +3 -0
- recipe.yaml +31 -0
- special_tokens_map.json +31 -0
- tokenizer.json +3 -0
- tokenizer_config.json +239 -0
- vocab.json +0 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,87 @@
---
license: apache-2.0
language:
- en
base_model:
- janhq/Jan-v1-4B
pipeline_tag: text-generation
---
# Jan-v1: Advanced Agentic Language Model

[](https://github.com/menloresearch/deep-research)
[](https://opensource.org/licenses/Apache-2.0)
[](https://jan.ai/)

<!-- Optional: If you have a GIF for Jan-v1, include it here like Lucy's. -->
<!--  -->

## Overview

**Jan-v1** is the first release in the **Jan Family**, designed for agentic reasoning and problem-solving within the [Jan App](https://jan.ai/). Based on our [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 achieves improved performance through model scaling.

Jan-v1 uses the [Qwen3-4B-thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to provide enhanced reasoning capabilities and tool utilization. This architecture delivers better performance on complex agentic tasks.

## Performance

### Question Answering (SimpleQA)

For question answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.1% accuracy.

*The 91.1% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.*

### Chat Benchmarks

These benchmarks evaluate the model's conversational and instruction-following capabilities.

## Quick Start

### Integration with Jan App

Jan-v1 is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities.

### Local Deployment

**Using vLLM:**
```bash
vllm serve janhq/Jan-v1-4B \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```
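Once the vLLM server above is running, it exposes an OpenAI-compatible API on port 1234, and `--enable-auto-tool-choice` with the Hermes parser returns structured tool calls. Below is a minimal client-side sketch using the `openai` Python package; the endpoint URL, API key placeholder, and the `get_weather` tool schema are illustrative assumptions, not part of this repository.

```python
# Hypothetical client sketch for the vLLM server started above.
# Assumes `pip install openai` and a server listening on localhost:1234.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

# Illustrative tool definition (not shipped with the model).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="janhq/Jan-v1-4B",
    messages=[{"role": "user", "content": "What's the weather in Hanoi?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The Hermes tool-call parser turns <tool_call> blocks into structured calls.
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```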

**Using llama.cpp:**
```bash
llama-server --model Jan-v1-4B-Q4_K_M.gguf \
  --host 0.0.0.0 \
  --port 1234 \
  --jinja \
  --no-context-shift
```

### Recommended Parameters

```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```
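As a rough illustration, the recommended parameters map onto an OpenAI-compatible chat request as follows. `top_k` and `min_p` are not part of the standard OpenAI schema, so this sketch assumes a vLLM backend that accepts them through `extra_body`; drop or adapt them for other servers.

```python
# Hedged sketch: apply the recommended sampling parameters against the
# OpenAI-compatible endpoint from the Local Deployment section.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="janhq/Jan-v1-4B",
    messages=[{"role": "user", "content": "Briefly explain what an agentic model is."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=2048,
    # vLLM-specific sampling extras; not part of the standard OpenAI API.
    extra_body={"top_k": 20, "min_p": 0.0},
)
print(response.choices[0].message.content)
```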

## 🤝 Community & Support

- **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-4B/discussions)
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation

```bibtex
Updated Soon
```

---
added_tokens.json
ADDED
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
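Note that the vocabulary above reserves explicit `<think>`/`</think>` markers (ids 151667/151668) for the model's reasoning trace, alongside the `<tool_call>`/`<tool_response>` tags. A minimal sketch of separating reasoning from the final answer in a decoded completion, assuming the raw generated text is available as a Python string:

```python
# Minimal sketch: split a decoded completion on the </think> marker
# defined in added_tokens.json (id 151668).
def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is empty if no </think> is present."""
    if "</think>" in text:
        reasoning, answer = text.split("</think>", 1)
        reasoning = reasoning.split("<think>", 1)[-1]
        return reasoning.strip(), answer.strip()
    return "", text.strip()

reasoning, answer = split_reasoning("<think>2 + 2 = 4</think>\n\nThe answer is 4.")
print(answer)  # -> The answer is 4.
```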
chat_template.jinja
ADDED
@@ -0,0 +1,87 @@
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0].role == 'system' %}
        {{- messages[0].content + '\n\n' }}
    {%- endif %}
    {{- "In this environment you have access to a set of tools you can use to answer the user's question. You can use one tool per message, and will receive the result of that tool use in the user's response. You use tools step-by-step to accomplish a given task, with each tool use informed by the result of the previous tool use.\n\nTool Use Rules\nHere are the rules you should always follow to solve your task:\n1. Always use the right arguments for the tools. Never use variable names as the action arguments, use the value instead.\n2. Call a tool only when needed: do not call the search agent if you do not need information, try to solve the task yourself.\n3. If no tool call is needed, just answer the question directly.\n4. Never re-do a tool call that you previously did with the exact same parameters.\n5. For tool use, MAKE SURE to use XML tag format as shown in the examples above. Do not use any other format.\nNow Begin! If you solve the task correctly, you will receive a reward of $1,000,000.\n\n" }}
    {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0].role == 'system' %}
        {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
    {%- set index = (messages|length - 1) - loop.index0 %}
    {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
        {%- set ns.multi_step_tool = false %}
        {%- set ns.last_query_index = index %}
    {%- endif %}
{%- endfor %}
{%- for message in messages %}
    {%- if message.content is string %}
        {%- set content = message.content %}
    {%- else %}
        {%- set content = '' %}
    {%- endif %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set reasoning_content = '' %}
        {%- if message.reasoning_content is string %}
            {%- set reasoning_content = message.reasoning_content %}
        {%- else %}
            {%- if '</think>' in content %}
                {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
                {%- set content = content.split('</think>')[-1].lstrip('\n') %}
            {%- endif %}
        {%- endif %}
        {%- if loop.index0 > ns.last_query_index %}
            {%- if loop.last or (not loop.last and reasoning_content) %}
                {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
            {%- else %}
                {{- '<|im_start|>' + message.role + '\n' + content }}
            {%- endif %}
        {%- else %}
            {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- endif %}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}
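To inspect what this template actually renders, it can be applied offline through the `transformers` tokenizer. A minimal sketch, assuming the tokenizer loads from the base repo id `janhq/Jan-v1-4B` (adjust to this repository's actual id) and using an illustrative tool schema that is not part of the repo:

```python
# Hedged sketch: render the chat template above without running the model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("janhq/Jan-v1-4B")  # assumed repo id

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Hanoi?"},
]
# Illustrative tool schema, only to exercise the `tools` branch of the template.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}]

prompt = tokenizer.apply_chat_template(
    messages, tools=tools, tokenize=False, add_generation_prompt=True
)
print(prompt)  # shows the <|im_start|> / <tools> framing produced by the template
```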
config.json
ADDED
@@ -0,0 +1,99 @@
{
  "architectures": [
    "Qwen3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 9728,
  "layer_types": [
    "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention"
  ],
  "max_position_embeddings": 262144,
  "max_window_layers": 36,
  "model_type": "qwen3",
  "num_attention_heads": 32,
  "num_hidden_layers": 36,
  "num_key_value_heads": 8,
  "quantization_config": {
    "config_groups": {
      "group_0": {
        "input_activations": null,
        "output_activations": null,
        "targets": [
          "Linear"
        ],
        "weights": {
          "actorder": null,
          "block_structure": null,
          "dynamic": false,
          "group_size": 128,
          "num_bits": 4,
          "observer": "minmax",
          "observer_kwargs": {},
          "strategy": "group",
          "symmetric": true,
          "type": "int"
        }
      }
    },
    "format": "pack-quantized",
    "global_compression_ratio": null,
    "ignore": [
      "lm_head"
    ],
    "kv_cache_scheme": null,
    "quant_method": "compressed-tensors",
    "quantization_status": "compressed"
  },
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 5000000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.55.0",
  "use_cache": false,
  "use_sliding_window": false,
  "vocab_size": 151936
}
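The `quantization_config` block above describes 4-bit, group-size-128, symmetric AWQ weights stored in the compressed-tensors "pack-quantized" format, with `lm_head` left unquantized. A loading sketch, assuming a recent `transformers` release with the `compressed-tensors` package installed; the repo id is a placeholder for this repository:

```python
# Hedged sketch: load the pack-quantized checkpoint described by config.json.
# Assumes `pip install transformers compressed-tensors accelerate`.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "janhq/Jan-v1-4B"  # placeholder; substitute this repository's id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",  # unquantized tensors remain bfloat16 per the config
    device_map="auto",   # dispatch to the available GPU(s)
)

inputs = tokenizer("Hello, world.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```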
generation_config.json
ADDED
@@ -0,0 +1,12 @@
{
  "_from_model_config": true,
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": 151645,
  "min_p": 0.0,
  "temperature": 0.6,
  "top_k": 20,
  "top_p": 0.95,
  "transformers_version": "4.55.0",
  "use_cache": false
}
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cbfc0beab5e69e43ab2623afc27644787760aa28e658b454b2bf2da024dc1d7a
size 3429751984
recipe.yaml
ADDED
@@ -0,0 +1,31 @@
default_stage:
  default_modifiers:
    AWQModifier:
      config_groups:
        group_0:
          targets: [Linear]
          weights:
            num_bits: 4
            type: int
            symmetric: true
            group_size: 128
            strategy: group
            block_structure: null
            dynamic: false
            actorder: null
            observer: minmax
            observer_kwargs: {}
          input_activations: null
          output_activations: null
      targets: [Linear]
      ignore: [lm_head]
      mappings:
      - smooth_layer: re:.*input_layernorm$
        balance_layers: ['re:.*q_proj$', 're:.*k_proj$', 're:.*v_proj$']
      - smooth_layer: re:.*v_proj$
        balance_layers: ['re:.*o_proj$']
      - smooth_layer: re:.*post_attention_layernorm$
        balance_layers: ['re:.*gate_proj$', 're:.*up_proj$']
      - smooth_layer: re:.*up_proj$
        balance_layers: ['re:.*down_proj$']
      duo_scaling: true
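This is the llm-compressor recipe that produced the quantization config above. A rough sketch of re-applying it with the `llmcompressor` one-shot entrypoint; the calibration dataset, sample count, sequence length, and output directory are assumptions, and exact argument names depend on the installed llm-compressor version:

```python
# Hedged sketch: re-run the AWQ recipe with llm-compressor.
# Entrypoint location and argument names vary across llmcompressor versions.
from llmcompressor import oneshot

oneshot(
    model="janhq/Jan-v1-4B",        # base model to quantize
    recipe="recipe.yaml",           # the recipe shown above
    dataset="open_platypus",        # placeholder calibration dataset
    num_calibration_samples=256,    # assumed calibration budget
    max_seq_length=2048,            # assumed calibration sequence length
    output_dir="Jan-v1-4B-AWQ",     # hypothetical output directory
)
```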
special_tokens_map.json
ADDED
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654
tokenizer_config.json
ADDED
@@ -0,0 +1,239 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151644": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151645": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151646": {"content": "<|object_ref_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151647": {"content": "<|object_ref_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151648": {"content": "<|box_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151649": {"content": "<|box_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151650": {"content": "<|quad_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151651": {"content": "<|quad_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151652": {"content": "<|vision_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151653": {"content": "<|vision_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151654": {"content": "<|vision_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151655": {"content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151656": {"content": "<|video_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151657": {"content": "<tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151658": {"content": "</tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151659": {"content": "<|fim_prefix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151660": {"content": "<|fim_middle|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151661": {"content": "<|fim_suffix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151662": {"content": "<|fim_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151663": {"content": "<|repo_name|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151664": {"content": "<|file_sep|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151665": {"content": "<tool_response>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151666": {"content": "</tool_response>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151667": {"content": "<think>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151668": {"content": "</think>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false}
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 262144,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff