2reb committed · Commit 414ce6d (verified) · 1 Parent(s): c3d3ed8

Upload TinyToolUse Calculator model

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,132 @@
+ ---
+ license: mit
+ base_model: Qwen/Qwen2-0.5B
+ tags:
+ - tool-use
+ - function-calling
+ - calculator
+ - tiny-tool-use
+ - bagel-labs
+ - fine-tuned
+ language:
+ - en
+ pipeline_tag: text-generation
+ library_name: transformers
+ ---
+
+ # TinyToolUse-Qwen2-0.5B-Calculator
+
+ A fine-tuned version of Qwen/Qwen2-0.5B trained for calculator tool use with the [Tiny Tool Use](https://github.com/bagel-org/bagel-RL) library.
+
+ ## Model Description
+
+ This model has been fine-tuned to understand and execute calculator tool calls: it answers mathematical questions by generating the appropriate tool-call syntax.
+
+ - **Base Model**: Qwen/Qwen2-0.5B
+ - **Training Method**: Supervised Fine-Tuning (SFT)
+ - **Training Library**: Tiny Tool Use
+ - **Tool**: Calculator (mathematical expressions)
+ - **Parameters**: 500M
+ - **Architecture**: 24 layers, hidden size 896
+
+ ## Training Details
+
+ - **Training Data**: 4 custom calculator examples
+ - **Training Steps**: 3 steps (1 epoch)
+ - **Training Time**: ~8.5 seconds
+ - **Hardware**: CPU (an RTX 4060 was available, but CUDA compatibility issues prevented GPU training)
+ - **Precision**: float32
+
+ ## Usage
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ # Load model and tokenizer
+ tokenizer = AutoTokenizer.from_pretrained("your-username/TinyToolUse-Qwen2-0.5B-Calculator")
+ model = AutoModelForCausalLM.from_pretrained("your-username/TinyToolUse-Qwen2-0.5B-Calculator")
+
+ # Example usage
+ prompt = "Human: What is 15 + 27?\nAssistant:"
+ inputs = tokenizer(prompt, return_tensors="pt")
+
+ with torch.no_grad():
+     outputs = model.generate(
+         inputs.input_ids,
+         max_new_tokens=100,
+         temperature=0.7,
+         do_sample=True,
+         pad_token_id=tokenizer.eos_token_id
+     )
+
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(response)
+ ```
+
+ ## Expected Output Format
+
+ The model should generate responses in the format:
+ ```
+ tool_code: print(calculator(expression='15 + 27'))
+ ```
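+
+ The repository does not ship a parser for this format, so the caller must extract and evaluate the expression itself. Below is a minimal sketch; the regex and the restriction to arithmetic via Python's `ast` module are assumptions of this example, not part of the model or the Tiny Tool Use library:
+
+ ```python
+ import ast
+ import operator
+ import re
+
+ # Whitelist of arithmetic operators; anything else is rejected.
+ _OPS = {
+     ast.Add: operator.add, ast.Sub: operator.sub,
+     ast.Mult: operator.mul, ast.Div: operator.truediv,
+     ast.Pow: operator.pow, ast.USub: operator.neg,
+ }
+
+ def safe_eval(expr):
+     """Evaluate a plain arithmetic expression without using eval()."""
+     def walk(node):
+         if isinstance(node, ast.Expression):
+             return walk(node.body)
+         if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
+             return node.value
+         if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
+             return _OPS[type(node.op)](walk(node.left), walk(node.right))
+         if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
+             return _OPS[type(node.op)](walk(node.operand))
+         raise ValueError(f"unsupported expression: {expr!r}")
+     return walk(ast.parse(expr, mode="eval"))
+
+ def run_tool_call(response):
+     """Extract the expression from a tool_code line and evaluate it."""
+     match = re.search(r"calculator\(expression='([^']*)'\)", response)
+     if match is None:
+         return None  # no tool call found in the response
+     return safe_eval(match.group(1))
+
+ print(run_tool_call("tool_code: print(calculator(expression='15 + 27'))"))  # 42
+ ```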
+
+ ## Tool Definition
+
+ The model was trained with this calculator tool definition:
+
+ ```json
+ {
+   "name": "calculator",
+   "description": "Perform mathematical calculations",
+   "type": "function",
+   "parameters": {
+     "type": "object",
+     "properties": {
+       "expression": {
+         "type": "string",
+         "description": "Mathematical expression to evaluate"
+       }
+     },
+     "required": ["expression"]
+   }
+ }
+ ```
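+
+ Because the `parameters` block is standard JSON Schema, arguments extracted from a tool call can be validated before execution. A small sketch using the third-party `jsonschema` package (not listed in this repo's requirements.txt):
+
+ ```python
+ from jsonschema import ValidationError, validate  # pip install jsonschema
+
+ CALCULATOR_PARAMS = {
+     "type": "object",
+     "properties": {
+         "expression": {
+             "type": "string",
+             "description": "Mathematical expression to evaluate"
+         }
+     },
+     "required": ["expression"]
+ }
+
+ def check_arguments(args):
+     """Return True if args satisfy the calculator tool's parameter schema."""
+     try:
+         validate(instance=args, schema=CALCULATOR_PARAMS)
+         return True
+     except ValidationError:
+         return False
+
+ print(check_arguments({"expression": "15 + 27"}))  # True
+ print(check_arguments({"expr": "15 + 27"}))        # False ('expression' is required)
+ ```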
+
+ ## Training Examples
+
+ The model was trained on these examples (one possible SFT serialization is sketched after the list):
+ - "What is 2 + 2?" → `tool_code: print(calculator(expression='2 + 2'))`
+ - "Calculate 10 * 5" → `tool_code: print(calculator(expression='10 * 5'))`
+ - "What is 100 / 25?" → `tool_code: print(calculator(expression='100 / 25'))`
+ - "Find the value of 3 to the power of 4." → `tool_code: print(calculator(expression='3 ** 4'))`
+
+ ## Limitations
+
+ - Small training dataset (4 examples)
+ - CPU-only training due to CUDA compatibility issues
+ - Limited to the calculator tool only
+ - May require additional fine-tuning for production use
+
+ ## Framework
+
+ This model was trained using the [Tiny Tool Use](https://github.com/bagel-org/bagel-RL) library, which provides:
+ - Multiple training methods (SFT, DPO, Teacher Mode)
+ - Flexible data generation strategies
+ - Integration with the Berkeley Function-Calling Leaderboard
+ - Support for various LLM architectures
+
+ ## Citation
+
+ ```bibtex
+ @misc{tinytooluse2024,
+   title={Tiny Tool Use: Training Open-Source LLMs for Tool Usage},
+   author={Bagel Labs},
+   year={2024},
+   url={https://github.com/bagel-org/bagel-RL}
+ }
+ ```
+
+ ## License
+
+ MIT License - see the [LICENSE](LICENSE) file for details.
added_tokens.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "<|endoftext|>": 151643,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "[/RESULT]": 151649,
+   "[/TOOL_CALL]": 151647,
+   "[RESULT]": 151648,
+   "[TOOL_CALL]": 151646
+ }
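A quick sanity check that these markers map to the listed IDs once the tokenizer is downloaded; the repo id below is the same placeholder used in the README:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-username/TinyToolUse-Qwen2-0.5B-Calculator")

# Each marker should resolve to the single ID listed in added_tokens.json,
# and encode atomically rather than as sub-word pieces.
for token, expected_id in [("[TOOL_CALL]", 151646), ("[/TOOL_CALL]", 151647),
                           ("[RESULT]", 151648), ("[/RESULT]", 151649)]:
    assert tokenizer.convert_tokens_to_ids(token) == expected_id
    assert tokenizer.encode(token, add_special_tokens=False) == [expected_id]
```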
config.json ADDED
@@ -0,0 +1,54 @@
+ {
+   "architectures": [
+     "Qwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "hidden_act": "silu",
+   "hidden_size": 896,
+   "initializer_range": 0.02,
+   "intermediate_size": 4864,
+   "layer_types": [
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention"
+   ],
+   "max_position_embeddings": 131072,
+   "max_window_layers": 24,
+   "model_type": "qwen2",
+   "num_attention_heads": 14,
+   "num_hidden_layers": 24,
+   "num_key_value_heads": 2,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.53.1",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151650
+ }
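The attention geometry implied by this config can be confirmed without downloading the weights; a small sketch (again using the placeholder repo id from the README):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("your-username/TinyToolUse-Qwen2-0.5B-Calculator")

head_dim = config.hidden_size // config.num_attention_heads            # 896 // 14 = 64
gqa_groups = config.num_attention_heads // config.num_key_value_heads  # 14 // 2 = 7
print(config.num_hidden_layers, head_dim, gqa_groups)  # 24 64 7
```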
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token_id": 151643,
+   "eos_token_id": 151643,
+   "max_new_tokens": 2048,
+   "transformers_version": "4.53.1"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9240ca07d4de4d80ec1416a429c03244023fc69ca19cd157014357824aea6016
+ size 1975138448
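(1,975,138,448 bytes is consistent with roughly 494M parameters at 4 bytes each, matching the float32 `torch_dtype` in config.json.)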
model_card.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "license": "mit",
+   "base_model": "Qwen/Qwen2-0.5B",
+   "tags": [
+     "tool-use",
+     "function-calling",
+     "calculator",
+     "tiny-tool-use",
+     "bagel-labs",
+     "fine-tuned"
+   ],
+   "language": [
+     "en"
+   ],
+   "pipeline_tag": "text-generation",
+   "library_name": "transformers"
+ }
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ torch>=2.0.0
+ transformers>=4.30.0
+ tokenizers>=0.13.0
special_tokens_map.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "additional_special_tokens": [
+     {
+       "content": "[TOOL_CALL]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "[/TOOL_CALL]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "[RESULT]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "[/RESULT]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false
+     }
+   ],
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1613481d58c0c98f4a249f3d850f893901460e06569015261ee26b1ea8f0ea69
+ size 11419014
tokenizer_config.json ADDED
@@ -0,0 +1,77 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "[TOOL_CALL]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "[/TOOL_CALL]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "[RESULT]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "[/RESULT]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "[TOOL_CALL]",
+     "[/TOOL_CALL]",
+     "[RESULT]",
+     "[/RESULT]"
+   ],
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 32768,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff