anemll committed
Commit ca8c6c5 · verified · 1 Parent(s): 9462c25

Upload folder using huggingface_hub

.DS_Store ADDED
Binary file (6.15 kB)
 
README.md ADDED
---
license: mit
tags:
- coreml
- ANE
- LLaMA
- Qwen
- DeepSeek
- Apple
- Apple Neural Engine
- DeepHermes
---
# ANEMLL

**ANEMLL** (pronounced like "animal") is an open-source project focused on accelerating the porting of Large Language Models (LLMs) to tensor processors, starting with the Apple Neural Engine (ANE).

The goal is to provide a fully open-source pipeline from model conversion to inference for common LLM architectures running on ANE.

This enables seamless integration and on-device inference for low-power applications on edge devices, ensuring maximum privacy and security.

This is critical for autonomous applications, where models run directly on the device without requiring an internet connection.

For more information, visit the [ANEMLL GitHub repository](https://github.com/anemll/anemll).

---

## License

ANEMLL is licensed under the [MIT License](https://opensource.org/license/mit).
The original model may require a separate license depending on the architecture:
- LLaMA models: based on Meta's LLaMA and may require Meta's license
- Qwen models: based on Alibaba's Qwen and may require Alibaba's license
- Other models: check the respective original model's license

This model was converted to CoreML using ANEMLL's open-source conversion pipeline, which supports multiple LLM architectures including LLaMA, Qwen, and DeepSeek variants.

---

## Requirements

- **macOS Sequoia** with Apple Neural Engine and 8 GB of RAM or more
- **CoreML Tools** and **Hugging Face Transformers** libraries
- **Python 3.9**
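
One way to set up a matching environment before installing anything (the environment name below is purely illustrative, not part of the release):

```bash
# Create and activate a Python 3.9 virtual environment (name is arbitrary)
python3.9 -m venv anemll-env
source anemll-env/bin/activate
```

The installation steps below can then be run inside this environment.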

`chat.py` provides a sample inference script.
`chat_full.py` provides a sample inference script with history and conversation management.

**Installation**

1. Download the model from Hugging Face:
```bash
# Install required tools
pip install huggingface_hub

# Install Git LFS (Large File Storage)
# macOS with Homebrew:
brew install git-lfs
# Or Ubuntu/Debian:
# sudo apt-get install git-lfs

# Initialize Git LFS
git lfs install

# Clone the repository with model files
git clone https://huggingface.co/anemll/anemll-meta-llama-Llama-3.2-1B-Instruct-ctx1024_0.3.4
```

2. Extract model files:
```bash
# Navigate to the cloned directory
cd anemll-meta-llama-Llama-3.2-1B-Instruct-ctx1024_0.3.4

# Pull LFS files (model weights)
git lfs pull

# Extract CoreML model files
find . -type f -name "*.zip" -exec unzip {} \;
```

3. Install dependencies:
```bash
pip install coremltools transformers
```

**CoreML Tools:**

See the coremltools installation guide at https://coremltools.readme.io/v4.0/docs/installation
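
As a quick, optional sanity check that the tools imported correctly (a one-liner, not required by the scripts):

```bash
python -c "import coremltools as ct; print(ct.__version__)"
```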

**How to Run**

1. Basic chat interface:
```bash
python chat.py --meta ./meta.yaml
```

2. Full conversation mode with history:
```bash
python chat_full.py --meta ./meta.yaml
```

> Note: The first time the model loads, macOS takes some time to compile and place it on the Neural Engine.
> Subsequent loads are much faster.
> Use Ctrl-D to exit and Ctrl-C to interrupt inference.
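
Both scripts read their model paths and settings from `meta.yaml` via the `--meta` flag. `chat.py` also supports scripted, single-prompt runs; the flags below come from its argument parser, while the prompt text and output filename are only examples:

```bash
# One-shot generation: answer a single prompt, cap output at 100 tokens,
# save the reply to response.txt, and skip the warmup pass (--nw)
python chat.py --meta ./meta.yaml --prompt "What is the Apple Neural Engine?" --max-tokens 100 --save response.txt --nw
```

For reference, `chat.py` looks up its settings under `model_info.parameters` in `meta.yaml`. The sketch below lists the fields it reads; the numeric values mirror this model's configuration, and the LUT entries are placeholders rather than the exact file contents:

```yaml
model_info:
  parameters:
    model_prefix: llama      # prefix used to locate the *_embeddings, *_lm_head and *_FFN_PF parts
    context_length: 1024
    batch_size: 64
    num_chunks: 1
    lut_ffn: "none"          # placeholder; set to the LUT bit width if that part is quantized
    lut_lmhead: "none"
    lut_embeddings: "none"
```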

**More Info**
Please check the following links for updates:

* [GitHub](https://github.com/anemll)
* [Hugging Face Models](https://huggingface.co/anemll)
* [Twitter/X](https://x.com/anemll)
* [Website](https://anemll.com)

# anemll-meta-llama-Llama-3.2-1B-Instruct-ctx1024_0.3.4

This is a CoreML model converted using ANEMLL for Apple Neural Engine inference.

## Available Distributions

### Standard Distribution
- Contains zipped MLMODELC files
- Suitable for macOS and development

### iOS Distribution
- Contains unzipped MLMODELC files
- Ready for iOS deployment
- Includes offline tokenizer support

## Model Information
- Context Length: 1024
- Batch Size: 64
- Number of Chunks: 1

## Quick Start

### Test in iOS/macOS App
Try our sample Chat-Bot app on TestFlight:
1. Install TestFlight from the App Store
2. Join the beta test: [TestFlight Link](https://testflight.apple.com/join/jrQq1D1C)
3. The app includes a small demo model pre-installed
4. You can add custom models via Hugging Face URLs

> [!Note]
> - The TestFlight app works on both iOS and macOS
> - It demonstrates proper model integration and provides a reference implementation
> - iOS requires unzipped MLMODELC files and `config.json` for the offline tokenizer
> - macOS supports both zipped and unzipped model formats
chat.py ADDED
1
+ # chat.py
2
+ #!/usr/bin/env python3
3
+ # chat.py
4
+ # Copyright (c) 2025 Anemll
5
+ # Licensed under the MIT License
6
+
7
+ import argparse
8
+ import os
9
+ import re
10
+ import glob
11
+ from pathlib import Path
12
+ import coremltools as ct
13
+ from transformers import LlamaTokenizer, AutoTokenizer
14
+ import torch
15
+ import torch.nn.functional as F
16
+ import numpy as np
17
+ import queue
18
+ import threading
19
+ import time
20
+ import yaml
21
+ import sys
22
+
23
+ # ANSI color codes
24
+ LIGHT_BLUE = "\033[94m"
25
+ DARK_BLUE = "\033[34m"
26
+ LIGHT_GREEN = "\033[92m"
27
+ RESET_COLOR = "\033[0m"
28
+
29
+ # Add at top with other constants
30
+ WARMUP_TOKEN_LIMIT = 10 # Maximum tokens to generate during warmup
31
+
32
+ class TokenPrinter:
33
+ """Handles background printing of generated tokens."""
34
+ def __init__(self, tokenizer):
35
+ self.tokenizer = tokenizer
36
+ self.token_queue = queue.Queue()
37
+ self.stop_event = threading.Event()
38
+ self.thread = None
39
+ self.buffer = ""
40
+ self.lock = threading.Lock()
41
+ self.thinking = True # Track if we're still in thinking mode
42
+ self.decoding_buffer = [] # Buffer for token IDs
43
+ # Add token counting and timing
44
+ self.start_time = time.time()
45
+ self.token_count = 0
46
+ self.start()
47
+
48
+ def start(self):
49
+ """Start the printer thread."""
50
+ if self.thread is None:
51
+ self.thread = threading.Thread(target=self._print_worker)
52
+ self.thread.daemon = True
53
+ self.thread.start()
54
+
55
+ def add_token(self, token_id):
56
+ """Add a token to the print queue."""
57
+ if not self.stop_event.is_set():
58
+ self.token_queue.put(token_id)
59
+ self.token_count += 1
60
+
61
+ def drain_buffer(self, eval_mode=False):
62
+ """Decode token IDs from decoding_buffer in the main thread."""
63
+ if not self.decoding_buffer:
64
+ return
65
+
66
+ # Decode all tokens at once in the main thread
67
+ token_str = self.tokenizer.decode(self.decoding_buffer)
68
+ self.decoding_buffer.clear()
69
+
70
+ # Store the text in buffer for later saving to file
71
+ with self.lock:
72
+ self.buffer += token_str
73
+
74
+ # Skip printing in eval mode
75
+ if eval_mode:
76
+ return
77
+
78
+ # Color-handling logic
79
+ if self.thinking and "</think>" in token_str:
80
+ self.thinking = False
81
+ parts = token_str.split("</think>")
82
+ if len(parts) > 0:
83
+ print(parts[0] + "</think>", end='', flush=True)
84
+ if len(parts) > 1:
85
+ print(LIGHT_BLUE + parts[1], end='', flush=True)
86
+ else:
87
+ if not self.thinking:
88
+ print(LIGHT_BLUE + token_str, end='', flush=True)
89
+ else:
90
+ print(token_str, end='', flush=True)
91
+
92
+ def _print_worker(self):
93
+ """Worker thread that takes token_ids from the queue."""
94
+ while not self.stop_event.is_set():
95
+ try:
96
+ token_id = self.token_queue.get(timeout=0.01)
97
+ with self.lock:
98
+ self.decoding_buffer.append(token_id)
99
+ self.token_queue.task_done()
100
+ except queue.Empty:
101
+ continue
102
+ except Exception as e:
103
+ print(f"\nError: Token printer error: {str(e)}")
104
+ break
105
+
106
+ def stop(self, eval_mode=False):
107
+ """Stop the printer thread."""
108
+ if self.thread and self.thread.is_alive():
109
+ # Ensure any remaining tokens are processed
110
+ self.drain_buffer()
111
+ self.stop_event.set()
112
+ try:
113
+ self.thread.join(timeout=1.0)
114
+ except Exception:
115
+ pass
116
+ # Calculate and print tokens/s with shorter format in blue (unless in eval mode)
117
+ if not eval_mode:
118
+ elapsed = time.time() - self.start_time
119
+ if elapsed > 0 and self.token_count > 0:
120
+ tokens_per_sec = self.token_count / elapsed
121
+ print(f"\n{DARK_BLUE}{tokens_per_sec:.1f} t/s{RESET_COLOR}")
122
+ else:
123
+ print(RESET_COLOR) # Reset color at the end
124
+ return self.buffer
125
+
126
+ def parse_model_path(path):
127
+ """Parse model path and return full path with .mlmodelc or .mlpackage extension."""
128
+ path = Path(path)
129
+
130
+ # If path exists exactly as specified, return it
131
+ if path.exists():
132
+ return str(path)
133
+
134
+ # Try with both extensions
135
+ candidates = [
136
+ path, # Original path
137
+ path.with_suffix('.mlmodelc'), # With .mlmodelc
138
+ path.with_suffix('.mlpackage'), # With .mlpackage
139
+ Path(str(path) + '.mlmodelc'), # Handle case where extension is included
140
+ Path(str(path) + '.mlpackage')
141
+ ]
142
+
143
+ # Try all possible paths
144
+ for candidate in candidates:
145
+ if candidate.exists():
146
+ return str(candidate)
147
+
148
+ # If embeddings with LUT suffix not found, try without LUT suffix
149
+ if "_lut" in str(path) and "embeddings" in str(path):
150
+ print(f"Failed to find {path}, trying without LUT suffix...")
151
+ # Remove LUT suffix
152
+ path_no_lut = str(path).split("_lut")[0]
153
+ path_no_lut = Path(path_no_lut)
154
+
155
+ # Try candidates without LUT suffix
156
+ candidates_no_lut = [
157
+ path_no_lut,
158
+ path_no_lut.with_suffix('.mlmodelc'),
159
+ path_no_lut.with_suffix('.mlpackage'),
160
+ Path(str(path_no_lut) + '.mlmodelc'),
161
+ Path(str(path_no_lut) + '.mlpackage')
162
+ ]
163
+
164
+ for candidate in candidates_no_lut:
165
+ if candidate.exists():
166
+ return str(candidate)
167
+
168
+ # Add no-LUT candidates to the list for error reporting
169
+ candidates.extend(candidates_no_lut)
170
+
171
+ # If we get here, no valid path was found
172
+ print("\nError: Model not found. Tried following paths:")
173
+ for candidate in candidates:
174
+ print(f" {candidate}")
175
+ raise FileNotFoundError(f"Model not found: {path}")
176
+
177
+ def parse_ffn_filename(path):
178
+ """Parse FFN model filename to extract chunk information."""
179
+ path = Path(path)
180
+ pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
181
+ match = re.search(pattern, path.name)
182
+
183
+ if match:
184
+ current_chunk = int(match.group(1))
185
+ total_chunks = int(match.group(2))
186
+ return current_chunk, total_chunks
187
+ return None, None
188
+
189
+ def find_all_chunks(base_path):
190
+ """Find all chunk files matching the base FFN path pattern."""
191
+ path = Path(base_path)
192
+ pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
193
+ return sorted(glob.glob(pattern))
194
+
195
+ def load_model(path, function_name=None):
196
+ """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
197
+ path = Path(path)
198
+ compute_unit = ct.ComputeUnit.CPU_AND_NE
199
+
200
+ try:
201
+ if path.suffix == '.mlmodelc':
202
+ # For compiled models (.mlmodelc), use CompiledMLModel
203
+ if function_name:
204
+ return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
205
+ else:
206
+ return ct.models.CompiledMLModel(str(path), compute_unit)
207
+ else:
208
+ # For packages (.mlpackage)
209
+ if function_name:
210
+ return ct.models.MLModel(str(path), function_name=function_name)
211
+ else:
212
+ return ct.models.MLModel(str(path))
213
+
214
+ except RuntimeError as e:
215
+ if "valid manifest does not exist" in str(e):
216
+ print(f"\nError: Could not load compiled model at {path}")
217
+ print("This might be because:")
218
+ print("1. The model is not properly compiled")
219
+ print("2. The model was compiled for a different OS version")
220
+ print("3. The model needs to be recompiled")
221
+ print("\nTry using the .mlpackage version instead, or recompile the model.")
222
+ raise
223
+
224
+ def load_metadata(model,args):
225
+ # Extract metadata and config parameters
226
+ metadata = {}
227
+ if hasattr(model, 'user_defined_metadata'):
228
+ meta = model.user_defined_metadata
229
+
230
+ # Extract key parameters with defaults
231
+ metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
232
+ metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length'])) # Added state_length
233
+ metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
234
+ metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
235
+ metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))
236
+
237
+ if not args.eval:
238
+ print("\nExtracted Parameters:")
239
+ print(f" Context Length: {metadata['context_length']}")
240
+ print(f" State Length: {metadata['state_length']}")
241
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
242
+ print(f" LUT Bits: {metadata['lut_bits']}")
243
+ print(f" Number of Chunks: {metadata['num_chunks']}")
244
+
245
+ # Print model info
246
+ print("\nModel Info:")
247
+ if 'com.anemll.info' in meta:
248
+ print(f" {meta['com.anemll.info']}")
249
+ if 'com.github.apple.coremltools.version' in meta:
250
+ print(f" CoreML Tools: {meta['com.github.apple.coremltools.version']}")
251
+
252
+ # Print model input/output shapes
253
+ print("\nModel Shapes:")
254
+ if hasattr(model, 'input_description'):
255
+ print(" Inputs:")
256
+ try:
257
+ if hasattr(model.input_description, 'items'):
258
+ for name, desc in model.input_description.items():
259
+ print(f" {name}: {desc}")
260
+ else:
261
+ print(f" {model.input_description}")
262
+ except:
263
+ print(f" Input description: {type(model.input_description)}")
264
+ if hasattr(model, 'output_description'):
265
+ print(" Outputs:")
266
+ try:
267
+ if hasattr(model.output_description, 'items'):
268
+ for name, desc in model.output_description.items():
269
+ print(f" {name}: {desc}")
270
+ else:
271
+ print(f" {model.output_description}")
272
+ except:
273
+ print(f" Output description: {type(model.output_description)}")
274
+ else:
275
+ if not args.eval:
276
+ print("\nWarning: No metadata found in model")
277
+
278
+ # Check if model directory name contains context length pattern (ctxXXX)
279
+ ctx_len = 512
280
+ if args.context_length is None:
281
+ import re
282
+ ctx_match = re.search(r'ctx(\d+)', str(args.d))
283
+ if ctx_match:
284
+ ctx_len0 = int(ctx_match.group(1))
285
+ if 512 <= ctx_len0 <= 8096:
286
+ ctx_len = ctx_len0
287
+ print(f"\nDetected context length {ctx_len} from directory name")
288
+ else:
289
+ print(f"\nWarning: No context length found in directory {ctx_len} from directory name {args.d}")
290
+ else:
291
+ ctx_len = args.context_length
292
+
293
+ # Use defaults or values from args
294
+ metadata['context_length'] = ctx_len
295
+ metadata['state_length'] = ctx_len
296
+ # Get batch size from args or use default
297
+ metadata['batch_size'] = getattr(args, 'batch_size', 64)
298
+ metadata['lut_bits'] = 4
299
+ metadata['num_chunks'] = getattr(args, 'num_chunks', 4)
300
+ if not args.eval:
301
+ print("\nUsing parameters:")
302
+ print(f" Context Length: {metadata['context_length']}")
303
+ print(f" State Length: {metadata['state_length']}")
304
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
305
+ print(f" LUT Bits: {metadata['lut_bits']}")
306
+ print(f" Number of Chunks: {metadata['num_chunks']}")
307
+
308
+ # Override with values from args if they exist
309
+ if hasattr(args, 'batch_size') and args.batch_size is not None:
310
+ metadata['batch_size'] = args.batch_size
311
+ if not args.eval:
312
+ print(f"\nOverriding batch size from args: {args.batch_size}")
313
+ if hasattr(args, 'num_chunks') and args.num_chunks is not None:
314
+ metadata['num_chunks'] = args.num_chunks
315
+ if not args.eval:
316
+ print(f"\nOverriding num chunks from args: {args.num_chunks}")
317
+
318
+ return metadata
319
+
320
+ def load_models(args,metadata):
321
+ """Load all required models and extract metadata."""
322
+ if not args.eval:
323
+ print("\nLoading models...")
324
+
325
+ try:
326
+ # Load embeddings model
327
+ if not args.eval:
328
+ print("\nLoading embeddings model...")
329
+ embed_path = parse_model_path(args.embed)
330
+ if not args.eval:
331
+ print(f"Loading from: {embed_path}")
332
+ embed_model = load_model(embed_path)
333
+ if not args.eval:
334
+ print("Embeddings model loaded successfully")
335
+ metadata = load_metadata(embed_model,args)
336
+
337
+
338
+
339
+ # Load LM head model
340
+ if not args.eval:
341
+ print("\nLoading LM head model...")
342
+ lmhead_path = parse_model_path(args.lmhead)
343
+ if not args.eval:
344
+ print(f"Loading from: {lmhead_path}")
345
+ lmhead_model = load_model(lmhead_path)
346
+ if not args.eval:
347
+ print("LM head model loaded successfully")
348
+
349
+ # Parse FFN path and find chunks if needed
350
+ if not args.eval:
351
+ print("\nLoading FFN+PREFILL model(s)...")
352
+ ffn_path = parse_model_path(args.ffn)
353
+ chunk_no, total_chunks = parse_ffn_filename(ffn_path)
354
+
355
+ ffn_models = []
356
+ if chunk_no and total_chunks:
357
+ if not args.eval:
358
+ print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
359
+ # Find and load all chunks
360
+ chunk_paths = find_all_chunks(ffn_path)
361
+ if len(chunk_paths) != total_chunks:
362
+ raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")
363
+
364
+ for chunk_path in chunk_paths:
365
+ if not args.eval:
366
+ print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
367
+ try:
368
+ # For chunked models, we need both infer and prefill functions
369
+ ffn_models.append({
370
+ 'infer': load_model(chunk_path, function_name='infer'),
371
+ 'prefill': load_model(chunk_path, function_name='prefill')
372
+ })
373
+ if not args.eval:
374
+ print("Chunk loaded successfully")
375
+ except Exception as e:
376
+ if not args.eval:
377
+ print(f"Error loading chunk {chunk_path}: {str(e)}")
378
+ raise
379
+ metadata = load_metadata(ffn_models[0],args)
380
+
381
+ else:
382
+ if not args.eval:
383
+ print("\nLoading single FFN model...")
384
+ ffn_models.append(load_model(ffn_path))
385
+ if not args.eval:
386
+ print("FFN model loaded successfully")
387
+
388
+ return embed_model, ffn_models, lmhead_model, metadata
389
+
390
+ except Exception as e:
391
+ print(f"\nError loading models: {str(e)}")
392
+ print("\nPlease ensure all model files exist and are accessible.")
393
+ print("Expected files:")
394
+ print(f" Embeddings: {args.embed}")
395
+ print(f" LM Head: {args.lmhead}")
396
+ print(f" FFN: {args.ffn}")
397
+ raise
398
+
399
+ # At the top of the file, make this a default path
400
+
401
+ def initialize_tokenizer(model_path=None, eval_mode=False):
402
+ """Initialize and configure the tokenizer."""
403
+ try:
404
+
405
+
406
+ tokenizer = AutoTokenizer.from_pretrained(
407
+ str(model_path),
408
+ use_fast=False,
409
+ trust_remote_code=True
410
+ )
411
+
412
+ if not eval_mode:
413
+ print("\nTokenizer Configuration:")
414
+ print(f"Tokenizer type: {type(tokenizer)}")
415
+ print(f"Tokenizer name: {tokenizer.__class__.__name__}")
416
+ print(f"Vocabulary size: {len(tokenizer)}")
417
+ print(f"Model max length: {tokenizer.model_max_length}")
418
+
419
+ if tokenizer.pad_token is None:
420
+ tokenizer.pad_token = tokenizer.eos_token
421
+ tokenizer.pad_token_id = tokenizer.eos_token_id
422
+ if not eval_mode:
423
+ print("Set PAD token to EOS token")
424
+
425
+ tokenizer.padding_side = "left"
426
+
427
+ if not eval_mode:
428
+ print(f"\nSpecial Tokens:")
429
+ print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
430
+ print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
431
+ print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
432
+ print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")
433
+
434
+ return tokenizer
435
+
436
+ except Exception as e:
437
+ print(f"\nError: Failed to load tokenizer from {model_path}")
438
+ print(f"Error details: {str(e)}")
439
+ print(f"Error type: {type(e)}")
440
+ print("\nThis appears to be a tokenizer loading issue.")
441
+
442
+ # Check if it's the specific Qwen tokenizer file issue
443
+ if "expected str, bytes or os.PathLike object, not NoneType" in str(e):
444
+ print("\nThis error suggests the tokenizer files are missing or incomplete.")
445
+ print("For Qwen models, you need the original model directory with tokenizer files.")
446
+ print("Try using: --tokenizer ~/.cache/huggingface/hub/models--Qwen--Qwen3-0.6B/snapshots/YOUR_SNAPSHOT_ID")
447
+ else:
448
+ print("Please provide the path to a compatible model directory with tokenizer files.")
449
+ import traceback
450
+ traceback.print_exc()
451
+ raise
452
+
453
+
454
+
455
+ def make_causal_mask(length, start):
456
+ """Create causal attention mask."""
457
+ mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
458
+ row_indices = np.arange(length).reshape(length, 1)
459
+ col_indices = np.arange(length).reshape(1, length)
460
+ mask[:, :, col_indices <= (row_indices + start)] = 0
461
+ return mask
462
+
463
+ def initialize_causal_mask(context_length, eval_mode=False):
464
+ """Initialize causal mask for transformer attention."""
465
+ causal_mask = make_causal_mask(context_length, 0)
466
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
467
+ if not eval_mode:
468
+ print(f"\nInitialized causal mask for context length {context_length}")
469
+ return causal_mask
470
+
471
+ def run_prefill(embed_model, ffn_models, input_ids, context_pos, context_length, batch_size=64, state=None, causal_mask=None):
472
+ """Run prefill on the input sequence."""
473
+ # Use provided causal mask or create one if not provided
474
+ if causal_mask is None:
475
+ causal_mask = make_causal_mask(context_length, 0)
476
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
477
+
478
+ # Process in batches
479
+ batch_pos = 0
480
+ while batch_pos < context_pos:
481
+ batch_end = min(batch_pos + batch_size, context_pos)
482
+ current_batch_size = batch_end - batch_pos
483
+
484
+ # Get current batch
485
+ batch_input = input_ids[:, batch_pos:batch_end]
486
+
487
+ # Always pad to full batch size for prefill
488
+ batch_input = F.pad(
489
+ batch_input,
490
+ (0, batch_size - current_batch_size),
491
+ value=0
492
+ )
493
+
494
+ # Generate position IDs for full batch size
495
+ position_ids = torch.arange(batch_pos, batch_pos+batch_size, dtype=torch.int32) # Changed: Always use full batch size
496
+ batch_causal_mask = causal_mask[:, :, batch_pos:batch_pos+batch_size, :] # Changed: Use full batch size
497
+
498
+ # Run embeddings
499
+ hidden_states = torch.from_numpy(
500
+ embed_model.predict({
501
+ 'input_ids': batch_input.numpy().astype(np.int32)
502
+ })['hidden_states']
503
+ )
504
+
505
+ # Run through FFN chunks with state
506
+ for ffn_model in ffn_models:
507
+ if isinstance(ffn_model, dict):
508
+ inputs = {
509
+ 'hidden_states': hidden_states.numpy().astype(np.float16), # [1, 64, hidden_size]
510
+ 'position_ids': position_ids.numpy().astype(np.int32), # [64]
511
+ 'causal_mask': batch_causal_mask.numpy().astype(np.float16), # [1, 1, 64, context_length]
512
+ 'current_pos': np.array([batch_pos], dtype=np.int32) # [1]
513
+ }
514
+ output = ffn_model['prefill'].predict(inputs, state)
515
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
516
+
517
+ batch_pos = batch_end
518
+
519
+ return torch.tensor([context_pos], dtype=torch.int32)
520
+
521
+ def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, metadata, state=None, causal_mask=None, temperature=0.0):
522
+ """Generate the next token."""
523
+ # Get current token
524
+ current_token = input_ids[:, pos-1:pos] # [1, 1]
525
+
526
+ # Ensure proper data type for CoreML
527
+ current_token_array = current_token.numpy().astype(np.int32)
528
+
529
+ # Run embeddings
530
+ hidden_states = torch.from_numpy(
531
+ embed_model.predict({'input_ids': current_token_array})['hidden_states']
532
+ ) # [1, 1, hidden_size]
533
+
534
+ # Create masks
535
+ update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
536
+ update_mask[0, 0, pos-1, 0] = 1.0
537
+ position_ids = torch.tensor([pos-1], dtype=torch.int32) # [1]
538
+
539
+ # Use provided causal mask or create one if not provided
540
+ if causal_mask is None:
541
+ causal_mask_data = make_causal_mask(context_length, 0)
542
+ single_causal_mask = torch.tensor(causal_mask_data[:, :, pos-1:pos, :], dtype=torch.float16) # [1, 1, 1, context_length]
543
+ else:
544
+ single_causal_mask = causal_mask[:, :, pos-1:pos, :]
545
+
546
+ # Run through FFN chunks with state
547
+ for ffn_model in ffn_models:
548
+ if isinstance(ffn_model, dict):
549
+ inputs = {
550
+ 'hidden_states': hidden_states.numpy().astype(np.float16),
551
+ 'update_mask': update_mask.numpy().astype(np.float16),
552
+ 'position_ids': position_ids.numpy().astype(np.int32),
553
+ 'causal_mask': single_causal_mask.numpy().astype(np.float16),
554
+ 'current_pos': position_ids.numpy().astype(np.int32)
555
+ }
556
+ output = ffn_model['infer'].predict(inputs, state)
557
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
558
+
559
+ # Run LM head
560
+ lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy().astype(np.float16)})
561
+ # Debug print
562
+ #print("\nLM Head output keys:", list(lm_output.keys()))
563
+
564
+ # Get number of logits from metadata, using split_lm_head if available
565
+ # First check for split_lm_head (new), then num_logits (legacy), default to 8
566
+ num_logits = metadata.get('split_lm_head', metadata.get('num_logits', 8))
567
+
568
+ # Combine logits1-N if they exist
569
+ if 'logits1' in lm_output:
570
+ # Concatenate all logits parts
571
+ logits_parts = []
572
+ for i in range(1, num_logits + 1):
573
+ key = f'logits{i}'
574
+ if key in lm_output:
575
+ logits_parts.append(torch.from_numpy(lm_output[key]))
576
+ logits = torch.cat(logits_parts, dim=-1) # Concatenate along vocab dimension
577
+ else:
578
+ # Try output_logits as fallback
579
+ logits = torch.from_numpy(lm_output['output_logits'])
580
+
581
+ # Apply temperature and sample
582
+ if temperature > 0:
583
+ logits = logits / temperature
584
+ probs = F.softmax(logits[0, -1, :], dim=-1)
585
+ next_token = torch.multinomial(probs, num_samples=1).item()
586
+ else:
587
+ next_token = torch.argmax(logits[0, -1, :]).item()
588
+
589
+ return next_token
590
+
591
+ def create_unified_state(ffn_models, context_length, eval_mode=False):
592
+ """Create unified KV cache state for transformer."""
593
+ if isinstance(ffn_models[0], dict):
594
+ # Use first FFN model's prefill function to create state
595
+ state = ffn_models[0]['prefill'].make_state()
596
+ if not eval_mode:
597
+ print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
598
+ return state
599
+ else:
600
+ state = ffn_models[0].make_state()
601
+ if not eval_mode:
602
+ print("\nCreated unified transformer state")
603
+ return state
604
+
605
+ def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, causal_mask=None, auto_prompt=None, warmup=False, save_file=None, max_tokens=None, no_template=False, eval_mode=False):
606
+ """Interactive chat loop."""
607
+ context_length = metadata.get('context_length')
608
+ batch_size = metadata.get('batch_size', 64)
609
+
610
+ if not warmup and not eval_mode:
611
+ print(f"\nUsing context length: {context_length}")
612
+ print("\nStarting chat session. Press Ctrl+D to exit.")
613
+ print("Type your message and press Enter to chat.")
614
+
615
+ # Check if tokenizer has chat template and if it works
616
+ has_chat_template = False
617
+ try:
618
+ # Test if chat template works
619
+ test_messages = [{"role": "user", "content": "test"}]
620
+ tokenizer.apply_chat_template(test_messages, return_tensors="pt")
621
+ has_chat_template = True
622
+ if not warmup and not eval_mode:
623
+ print("\nUsing chat template for prompts")
624
+ except:
625
+ if not warmup and not eval_mode:
626
+ print("\nUsing manual formatting for prompts")
627
+
628
+ conversation = []
629
+
630
+ try:
631
+ while True:
632
+ try:
633
+ if not warmup and not eval_mode:
634
+ print(f"\n{LIGHT_GREEN}You:{RESET_COLOR}", end=' ', flush=True)
635
+ if auto_prompt is not None:
636
+ user_input = auto_prompt
637
+ if not warmup and not eval_mode:
638
+ print(user_input)
639
+ else:
640
+ user_input = input().strip()
641
+ except EOFError:
642
+ if not warmup and not eval_mode:
643
+ print("\nExiting chat...")
644
+ break
645
+
646
+ if not user_input:
647
+ continue
648
+
649
+ # Format prompt based on no_template flag and tokenizer capabilities
650
+ if no_template:
651
+ # Use raw input without any chat template formatting
652
+ input_ids = tokenizer(
653
+ user_input,
654
+ return_tensors="pt",
655
+ add_special_tokens=True
656
+ ).input_ids.to(torch.int32)
657
+ if not warmup and not eval_mode:
658
+ print("Using raw input without chat template")
659
+ elif has_chat_template:
660
+ messages = [{"role": "user", "content": user_input}]
661
+ input_ids = tokenizer.apply_chat_template(
662
+ messages,
663
+ return_tensors="pt",
664
+ add_generation_prompt=True
665
+ ).to(torch.int32)
666
+ else:
667
+ # Manual formatting for Llama models without chat template
668
+ formatted_prompt = f"[INST] {user_input} [/INST]"
669
+ input_ids = tokenizer(
670
+ formatted_prompt,
671
+ return_tensors="pt",
672
+ add_special_tokens=True
673
+ ).input_ids.to(torch.int32)
674
+
675
+ context_pos = input_ids.size(1)
676
+
677
+ if not warmup and not eval_mode:
678
+ print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)
679
+
680
+ # Initialize token printer
681
+ token_printer = TokenPrinter(tokenizer)
682
+ tokens_generated = 0 # Track number of tokens
683
+
684
+ try:
685
+ # Start prefill timing
686
+ prefill_start = time.time()
687
+
688
+ # Run prefill with state and causal mask
689
+ # Ensure batch_size is not None
690
+ if batch_size is None:
691
+ batch_size = 64
692
+ if not eval_mode:
693
+ print(f"Warning: batch_size was None, using default: {batch_size}")
694
+
695
+ _ = run_prefill(
696
+ embed_model,
697
+ ffn_models,
698
+ input_ids,
699
+ context_pos,
700
+ context_length,
701
+ batch_size,
702
+ state,
703
+ causal_mask
704
+ )
705
+
706
+ # Calculate prefill timing
707
+ prefill_time = time.time() - prefill_start
708
+ prefill_tokens = context_pos # Number of tokens in input
709
+ prefill_tokens_per_sec = prefill_tokens / prefill_time if prefill_time > 0 else 0
710
+
711
+ # Generation loop with state
712
+ input_ids = input_ids
713
+ pos = context_pos
714
+ inference_start = time.time()
715
+ inference_tokens = 0
716
+
717
+ while pos < context_length - 1:
718
+ # Generate next token with causal mask
719
+ next_token = generate_next_token(
720
+ embed_model,
721
+ ffn_models,
722
+ lmhead_model,
723
+ input_ids,
724
+ pos,
725
+ context_length,
726
+ metadata,
727
+ state,
728
+ causal_mask
729
+ )
730
+
731
+ # Add token to sequence
732
+ if pos < input_ids.size(1):
733
+ input_ids[0, pos] = next_token
734
+ else:
735
+ input_ids = torch.cat([
736
+ input_ids,
737
+ torch.tensor([[next_token]], dtype=torch.int32)
738
+ ], dim=1)
739
+
740
+ # Add to printer only if not in warmup
741
+ if not warmup:
742
+ token_printer.add_token(next_token)
743
+ token_printer.drain_buffer(eval_mode)
744
+
745
+ pos += 1
746
+ tokens_generated += 1
747
+ inference_tokens += 1
748
+
749
+ # Check limits
750
+ if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
751
+ break
752
+
753
+ # Check max_tokens limit
754
+ if max_tokens is not None and tokens_generated >= max_tokens:
755
+ break
756
+
757
+ # Check for all possible EOS tokens
758
+ eos_token_ids = tokenizer.eos_token_id
759
+ if isinstance(eos_token_ids, list):
760
+ if next_token in eos_token_ids:
761
+ break
762
+ else:
763
+ if next_token == eos_token_ids:
764
+ break
765
+
766
+ # Calculate inference timing
767
+ inference_time = time.time() - inference_start
768
+ inference_tokens_per_sec = inference_tokens / inference_time if inference_time > 0 else 0
769
+
770
+ # Get final response and add to conversation
771
+ if not warmup:
772
+ response = token_printer.stop(eval_mode)
773
+ if eval_mode:
774
+ # In eval mode, only print the model response
775
+ print(response, end='')
776
+ else:
777
+ # Print timing stats
778
+ prefill_ms = prefill_time * 1000 # Convert to milliseconds
779
+ print(f"\nPrefill: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s)")
780
+ print(f"Inference: {inference_tokens_per_sec:.1f} t/s")
781
+ print(f"Total: Generated {tokens_generated} tokens in {prefill_time + inference_time:.2f}s")
782
+ conversation.append({"role": "assistant", "content": response})
783
+
784
+ # Save response to file if requested
785
+ if save_file and not eval_mode:
786
+ try:
787
+ # Add small delay to ensure all tokens are processed
788
+ time.sleep(0.5)
789
+
790
+ # Make sure response ends with EOS token if it's supposed to
791
+ if response and not response.endswith("<|eot_id|>") and not response.endswith("</s>"):
792
+ if tokenizer.eos_token:
793
+ eos_text = tokenizer.decode([tokenizer.eos_token_id])
794
+ if not response.endswith(eos_text):
795
+ print(f"\n{DARK_BLUE}Adding missing EOS token for consistency{RESET_COLOR}")
796
+ response += eos_text
797
+
798
+ with open(save_file, 'w') as f:
799
+ f.write(response)
800
+ print(f"\n{DARK_BLUE}Response saved to file: {save_file}{RESET_COLOR}")
801
+ except Exception as e:
802
+ print(f"\n{DARK_BLUE}Error saving to file: {str(e)}{RESET_COLOR}")
803
+ else:
804
+ token_printer.stop(eval_mode) # Clean up without printing stats
805
+
806
+ # Exit after one response in auto_prompt mode
807
+ if auto_prompt is not None:
808
+ break
809
+
810
+ except KeyboardInterrupt:
811
+ if not eval_mode:
812
+ print("\nGeneration interrupted")
813
+ token_printer.stop(eval_mode)
814
+ continue
815
+
816
+ except Exception as e:
817
+ print(f"\nError in chat loop: {str(e)}")
818
+ import traceback
819
+ traceback.print_exc()
820
+
821
+ def parse_args():
822
+ parser = argparse.ArgumentParser(description='Chat with CoreML LLaMA, gil resolved (c) 2025 Anemll')
823
+
824
+ # Add meta.yaml option
825
+ parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')
826
+
827
+ # Model paths
828
+ parser.add_argument('--d', '--dir', type=str, default='.',
829
+ help='Directory containing model files (default: current directory)')
830
+ parser.add_argument('--embed', type=str, required=False,
831
+ help='Path to embeddings model (relative to --dir)')
832
+ parser.add_argument('--ffn', type=str, required=False,
833
+ help='Path to FFN model (can be chunked, relative to --dir)')
834
+ parser.add_argument('--lmhead', type=str, required=False,
835
+ help='Path to LM head model (relative to --dir)')
836
+ parser.add_argument('--tokenizer', type=str, required=False,
837
+ help='Path to tokenizer')
838
+
839
+ # Add new argument for auto-generation
840
+ parser.add_argument('--prompt', type=str,
841
+ help='If specified, run once with this prompt and exit')
842
+
843
+ # Add save option
844
+ parser.add_argument('--save', type=str,
845
+ help='Save assistant\'s response to specified file')
846
+
847
+ # Add max-tokens option
848
+ parser.add_argument('--max-tokens', type=int,
849
+ help='Maximum number of tokens to generate')
850
+
851
+ # Add no-warmup flag
852
+ parser.add_argument('--nw', action='store_true',
853
+ help='Skip warmup phase')
854
+
855
+ # Add no-template flag
856
+ parser.add_argument('--no-template', action='store_true',
857
+ help='Prefill the question itself and start inference directly without chat template')
858
+
859
+ # Add eval mode flag
860
+ parser.add_argument('--eval', action='store_true',
861
+ help='Evaluation mode: suppress all output except model response')
862
+
863
+ # Model configuration
864
+ parser.add_argument('--context-length', type=int,
865
+ help='Context length for the model (default: 512), if not provided, it will be detected from the model directory name ctxNUMBER')
866
+ parser.add_argument('--batch-size', type=int,
867
+ help='Batch size for prefill (default: 64)')
868
+ parser.add_argument('--num-logits', type=int, default=8,
869
+ help='Number of logits outputs from LM head (default: 8, legacy)')
870
+ parser.add_argument('--split-lm-head', type=int,
871
+ help='Number of logits splits from LM head (default: 8 for llama, 16 for qwen)')
872
+
873
+ args = parser.parse_args()
874
+
875
+ # If meta.yaml is provided, load parameters from it
876
+ if args.meta:
877
+ try:
878
+ with open(args.meta, 'r') as f:
879
+ meta = yaml.safe_load(f)
880
+ params = meta['model_info']['parameters']
881
+
882
+ # Set model directory to meta.yaml directory if not specified
883
+ if not args.d or args.d == '.':
884
+ args.d = str(Path(args.meta).parent)
885
+
886
+ # Build model paths based on parameters
887
+ prefix = params.get('model_prefix', 'llama') # Default to 'llama' if not specified
888
+ lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
889
+ lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
890
+ lut_embeddings = f"_lut{params['lut_embeddings']}" if params['lut_embeddings'] != 'none' else ''
891
+ num_chunks = int(params['num_chunks'])
892
+
893
+ # Set model paths if not specified
894
+ if not args.lmhead:
895
+ args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
896
+ if not args.embed:
897
+ args.embed = f'{prefix}_embeddings{lut_embeddings}' # Changed from lm_head to embeddings
898
+ if not args.ffn:
899
+ args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
900
+ if not args.tokenizer:
901
+ # Check if there's a tokenizer_path parameter in meta.yaml
902
+ if 'tokenizer_path' in params:
903
+ args.tokenizer = params['tokenizer_path']
904
+ else:
905
+ # Default to the model directory, but this might need manual override
906
+ args.tokenizer = args.d
907
+
908
+ # Set other parameters if not overridden by command line
909
+ if args.context_length is None:
910
+ args.context_length = int(params['context_length'])
911
+ if args.batch_size is None:
912
+ args.batch_size = int(params['batch_size'])
913
+ args.num_chunks = num_chunks
914
+ # Add num_logits parameter with default of 8, override command line if present in meta
915
+ if 'num_logits' in params:
916
+ args.num_logits = int(params['num_logits'])
917
+
918
+ # Add split_lm_head parameter with default of 8
919
+ if 'split_lm_head' in params:
920
+ args.split_lm_head = int(params['split_lm_head'])
921
+ else:
922
+ args.split_lm_head = 8 # Default value for backward compatibility
923
+
924
+ if not args.eval:
925
+ print(f"\nLoaded parameters from {args.meta}:")
926
+ print(f" Context Length: {args.context_length}")
927
+ print(f" Batch Size: {args.batch_size}")
928
+ print(f" Num Chunks: {args.num_chunks}")
929
+ print(f" Num Logits: {args.num_logits}")
930
+ print(f" Split LM Head: {args.split_lm_head}")
931
+ print(f" Models Directory: {args.d}")
932
+ print(f" Embeddings: {args.embed}")
933
+ print(f" LM Head: {args.lmhead}")
934
+ print(f" FFN: {args.ffn}")
935
+
936
+ except Exception as e:
937
+ print(f"\nError loading meta.yaml: {str(e)}")
938
+ sys.exit(1)
939
+ else:
940
+ # If no meta.yaml, set default split_lm_head if not provided
941
+ if not hasattr(args, 'split_lm_head') or args.split_lm_head is None:
942
+ args.split_lm_head = args.num_logits # Use num_logits as fallback
943
+
944
+ return args
945
+
946
+ def main():
947
+ args = parse_args()
948
+
949
+ # Convert directory to absolute path
950
+ model_dir = Path(args.d).resolve()
951
+ if not model_dir.exists():
952
+ if not args.eval:
953
+ print(f"\nError: Model directory not found: {model_dir}")
954
+ return 1
955
+
956
+ if not args.eval:
957
+ print(f"\nUsing model directory: {model_dir}")
958
+ print(f"Context length: {args.context_length}")
959
+
960
+ try:
961
+ # Update paths to be relative to model directory
962
+ args.embed = str(model_dir / args.embed)
963
+ args.ffn = str(model_dir / args.ffn)
964
+ args.lmhead = str(model_dir / args.lmhead)
965
+
966
+ # Handle tokenizer path separately since it's not relative to model_dir
967
+ if args.tokenizer is None:
968
+ args.tokenizer = str(model_dir)
969
+
970
+ # Check if tokenizer directory exists and has required files
971
+ tokenizer_path = Path(args.tokenizer)
972
+ if not tokenizer_path.exists():
973
+ if not args.eval:
974
+ print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
975
+ return 1
976
+
977
+ # Check if tokenizer has the required files
978
+ required_files = ['tokenizer.json', 'tokenizer_config.json']
979
+ missing_files = [f for f in required_files if not (tokenizer_path / f).exists()]
980
+
981
+ if missing_files and not args.eval:
982
+ print(f"\nWarning: Tokenizer directory missing required files: {missing_files}")
983
+ print(f"Current tokenizer path: {args.tokenizer}")
984
+ print("\nFor Qwen models, you may need to specify the original model directory:")
985
+ print(" python chat.py --meta /tmp/qwen/meta.yaml --tokenizer ~/.cache/huggingface/hub/models--Qwen--Qwen3-0.6B/snapshots/YOUR_SNAPSHOT_ID")
986
+ print("\nOr add 'tokenizer_path' to your meta.yaml file.")
987
+
988
+ args.tokenizer = str(Path(args.tokenizer).resolve()) # Convert to absolute path
989
+ if not args.eval:
990
+ print(f"Using tokenizer path: {args.tokenizer}")
991
+
992
+ metadata = {}
993
+ # Load models and extract metadata
994
+ embed_model, ffn_models, lmhead_model, metadata = load_models(args,metadata)
995
+
996
+ if not args.eval:
997
+ print(f"\nMetadata befor args.context_length: {metadata}")
998
+
999
+ # Override context length from command line if provided
1000
+ if args.context_length is not None:
1001
+ metadata['context_length'] = args.context_length
1002
+ metadata['state_length'] = args.context_length # Also update state_length
1003
+ if not args.eval:
1004
+ print(f"\nOverriding context length from command line: {args.context_length}")
1005
+
1006
+ # Add num_logits to metadata (legacy support)
1007
+ metadata['num_logits'] = getattr(args, 'num_logits', 8)
1008
+
1009
+ # Add split_lm_head to metadata (preferred)
1010
+ metadata['split_lm_head'] = getattr(args, 'split_lm_head', getattr(args, 'num_logits', 8))
1011
+
1012
+ if not args.eval:
1013
+ print(f"\nMetadata after load_models: {metadata}")
1014
+ print(f"Using split_lm_head value: {metadata.get('split_lm_head', 8)}")
1015
+
1016
+ # Load tokenizer with resolved path
1017
+ tokenizer = initialize_tokenizer(args.tokenizer, args.eval)
1018
+ if tokenizer is None:
1019
+ raise RuntimeError("Failed to initialize tokenizer")
1020
+
1021
+ # Create unified state once
1022
+ state = create_unified_state(ffn_models, metadata['context_length'], args.eval)
1023
+
1024
+ # Initialize causal mask once
1025
+ causal_mask = initialize_causal_mask(metadata['context_length'], args.eval)
1026
+
1027
+ # Warmup runs to prevent Python GIL issues with CoreML !
1028
+ if not args.nw and not args.eval:
1029
+ for _ in range(2):
1030
+ chat_loop(
1031
+ embed_model=embed_model,
1032
+ ffn_models=ffn_models,
1033
+ lmhead_model=lmhead_model,
1034
+ tokenizer=tokenizer,
1035
+ metadata=metadata,
1036
+ state=state,
1037
+ causal_mask=causal_mask, # Pass the causal mask
1038
+ warmup=True,
1039
+ auto_prompt="who are you?",
1040
+ no_template=args.no_template,
1041
+ eval_mode=args.eval
1042
+ )
1043
+
1044
+ # Main run
1045
+ chat_loop(
1046
+ embed_model=embed_model,
1047
+ ffn_models=ffn_models,
1048
+ lmhead_model=lmhead_model,
1049
+ tokenizer=tokenizer,
1050
+ metadata=metadata,
1051
+ state=state,
1052
+ causal_mask=causal_mask, # Pass the causal mask
1053
+ warmup=False,
1054
+ auto_prompt=args.prompt,
1055
+ save_file=args.save,
1056
+ max_tokens=args.max_tokens,
1057
+ no_template=args.no_template,
1058
+ eval_mode=args.eval
1059
+ )
1060
+
1061
+ except Exception as e:
1062
+ if not args.eval:
1063
+ print(f"\nError: {str(e)}")
1064
+ import traceback
1065
+ traceback.print_exc()
1066
+ return 1
1067
+
1068
+ return 0
1069
+
1070
+ if __name__ == "__main__":
1071
+ exit(main())
chat_full.py ADDED
1
+ # chat_full.py
2
+ #!/usr/bin/env python3
3
+ # chat_full.py
4
+ # Copyright (c) 2025 Anemll
5
+ # Licensed under the MIT License
6
+
7
+ import argparse
8
+ import os
9
+ import re
10
+ import glob
11
+ from pathlib import Path
12
+ import coremltools as ct
13
+ from transformers import LlamaTokenizer, AutoTokenizer
14
+ import torch
15
+ import torch.nn.functional as F
16
+ import numpy as np
17
+ import queue
18
+ import threading
19
+ import time
20
+ import yaml
21
+ import sys
22
+
23
+ # ANSI color codes
24
+ LIGHT_BLUE = "\033[94m"
25
+ DARK_BLUE = "\033[34m"
26
+ LIGHT_GREEN = "\033[92m"
27
+ RESET_COLOR = "\033[0m"
28
+
29
+ # Add at the top with other constants
30
+ WARMUP_TOKEN_LIMIT = 10 # Maximum tokens to generate during warmup
31
+ THINKING_MODE = False
32
+ THINKING_PROMPT = """You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem."""
33
+ DEBUG_LEVEL = 0 # Default debug level
34
+
35
+ class TokenPrinter:
36
+ """Handles background printing of generated tokens."""
37
+ def __init__(self, tokenizer):
38
+ self.tokenizer = tokenizer
39
+ self.token_queue = queue.Queue()
40
+ self.stop_event = threading.Event()
41
+ self.thread = None
42
+ self.buffer = ""
43
+ self.lock = threading.Lock()
44
+ self.thinking = True # Track if we're still in thinking mode
45
+ self.decoding_buffer = [] # Buffer for token IDs
46
+ # Timing and stats tracking
47
+ self.start_time = time.time()
48
+ self.token_count = 0
49
+ self.prefill_time = 0
50
+ self.inference_time = 0
51
+ self.context_pos = 0
52
+ self.start()
53
+
54
+ def start(self):
55
+ """Start the printer thread."""
56
+ if self.thread is None:
57
+ self.thread = threading.Thread(target=self._print_worker)
58
+ self.thread.daemon = True
59
+ self.thread.start()
60
+
61
+ def add_token(self, token_id):
62
+ """Add a token to the print queue."""
63
+ if not self.stop_event.is_set():
64
+ self.token_queue.put(token_id)
65
+ self.token_count += 1
66
+
67
+ def drain_buffer(self):
68
+ """Decode token IDs from decoding_buffer in the main thread."""
69
+ if not self.decoding_buffer:
70
+ return
71
+
72
+ # Decode all tokens at once in the main thread
73
+ token_str = self.tokenizer.decode(self.decoding_buffer)
74
+ self.decoding_buffer.clear()
75
+
76
+ # Color-handling logic
77
+ if self.thinking and "</think>" in token_str:
78
+ self.thinking = False
79
+ parts = token_str.split("</think>")
80
+ if len(parts) > 0:
81
+ print(parts[0] + "</think>", end='', flush=True)
82
+ if len(parts) > 1:
83
+ print(LIGHT_BLUE + parts[1], end='', flush=True)
84
+ else:
85
+ if not self.thinking:
86
+ print(LIGHT_BLUE + token_str, end='', flush=True)
87
+ else:
88
+ print(token_str, end='', flush=True)
89
+
90
+ def _print_worker(self):
91
+ """Worker thread that takes token_ids from the queue."""
92
+ while not self.stop_event.is_set():
93
+ try:
94
+ token_id = self.token_queue.get(timeout=0.01)
95
+ with self.lock:
96
+ self.decoding_buffer.append(token_id)
97
+ self.token_queue.task_done()
98
+ except queue.Empty:
99
+ continue
100
+ except Exception as e:
101
+ print(f"\nError: Token printer error: {str(e)}")
102
+ break
103
+
104
+ def stop(self):
105
+ """Stop the printer thread."""
106
+ if self.thread and self.thread.is_alive():
107
+ self.stop_event.set()
108
+ try:
109
+ self.thread.join(timeout=1.0)
110
+ except Exception:
111
+ pass
112
+ print(RESET_COLOR) # Reset color at the end
113
+ return self.buffer
114
+
115
+ def set_timing(self, prefill_time, inference_time, context_pos):
116
+ """Set timing information."""
117
+ self.prefill_time = prefill_time
118
+ self.inference_time = inference_time
119
+ self.context_pos = context_pos
120
+
121
+ def parse_model_path(path):
122
+ """Parse model path and return full path with .mlmodelc or .mlpackage extension."""
123
+ path = Path(path)
124
+
125
+ # If path exists exactly as specified, return it
126
+ if path.exists():
127
+ return str(path)
128
+
129
+ # Try with both extensions
130
+ candidates = [
131
+ path, # Original path
132
+ path.with_suffix('.mlmodelc'), # With .mlmodelc
133
+ path.with_suffix('.mlpackage'), # With .mlpackage
134
+ Path(str(path) + '.mlmodelc'), # Handle case where extension is included
135
+ Path(str(path) + '.mlpackage')
136
+ ]
137
+
138
+ # Try all possible paths
139
+ for candidate in candidates:
140
+ if candidate.exists():
141
+ print(f"Found model at: {candidate}")
142
+ return str(candidate)
143
+
144
+ # If embeddings with LUT suffix not found, try without LUT suffix
145
+ if "_lut" in str(path) and "embeddings" in str(path):
146
+ print(f"Failed to find {path}, trying without LUT suffix...")
147
+ # Remove LUT suffix
148
+ path_no_lut = str(path).split("_lut")[0]
149
+ path_no_lut = Path(path_no_lut)
150
+
151
+ # Try candidates without LUT suffix
152
+ candidates_no_lut = [
153
+ path_no_lut,
154
+ path_no_lut.with_suffix('.mlmodelc'),
155
+ path_no_lut.with_suffix('.mlpackage'),
156
+ Path(str(path_no_lut) + '.mlmodelc'),
157
+ Path(str(path_no_lut) + '.mlpackage')
158
+ ]
159
+
160
+ for candidate in candidates_no_lut:
161
+ if candidate.exists():
162
+ print(f"Found model at: {candidate}")
163
+ return str(candidate)
164
+
165
+ # Add no-LUT candidates to the list for error reporting
166
+ candidates.extend(candidates_no_lut)
167
+
168
+ # If we get here, no valid path was found
169
+ print("\nError: Model not found. Tried following paths:")
170
+ for candidate in candidates:
171
+ print(f" {candidate}")
172
+ raise FileNotFoundError(f"Model not found: {path}")
173
+
174
+ def parse_ffn_filename(path):
175
+ """Parse FFN model filename to extract chunk information."""
176
+ path = Path(path)
177
+ pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
178
+ match = re.search(pattern, path.name)
179
+
180
+ if match:
181
+ current_chunk = int(match.group(1))
182
+ total_chunks = int(match.group(2))
183
+ return current_chunk, total_chunks
184
+ return None, None
185
+
186
+ def find_all_chunks(base_path):
187
+ """Find all chunk files matching the base FFN path pattern."""
188
+ path = Path(base_path)
189
+ pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
190
+ return sorted(glob.glob(pattern))
191
+
192
+ def load_model(path, function_name=None):
193
+ """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
194
+ path = Path(path)
195
+ compute_unit = ct.ComputeUnit.CPU_AND_NE
196
+
197
+ try:
198
+ if path.suffix == '.mlmodelc':
199
+ # For compiled models (.mlmodelc), use CompiledMLModel
200
+ if function_name:
201
+ return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
202
+ else:
203
+ return ct.models.CompiledMLModel(str(path), compute_unit)
204
+ else:
205
+ # For packages (.mlpackage)
206
+ if function_name:
207
+ return ct.models.MLModel(str(path), function_name=function_name)
208
+ else:
209
+ return ct.models.MLModel(str(path))
210
+
211
+ except RuntimeError as e:
212
+ if "valid manifest does not exist" in str(e):
213
+ print(f"\nError: Could not load compiled model at {path}")
214
+ print("This might be because:")
215
+ print("1. The model is not properly compiled")
216
+ print("2. The model was compiled for a different OS version")
217
+ print("3. The model needs to be recompiled")
218
+ print("\nTry using the .mlpackage version instead, or recompile the model.")
219
+ raise
220
+
221
+ def parse_args():
222
+ parser = argparse.ArgumentParser(description='Full Chat with CoreML LLaMA with context window shifting, gil resolved (c) 2025 Anemll')
223
+
224
+ # Add meta.yaml option
225
+ parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')
226
+
227
+ # Add existing arguments
228
+ parser.add_argument('--d', '--dir', type=str, default='.',
229
+ help='Directory containing model files (default: current directory)')
230
+ parser.add_argument('--embed', type=str, required=False,
231
+ help='Path to embeddings model (relative to --dir)')
232
+ parser.add_argument('--ffn', type=str, required=False,
233
+ help='Path to FFN model (can be chunked, relative to --dir)')
234
+ parser.add_argument('--lmhead', type=str, required=False,
235
+ help='Path to LM head model (relative to --dir)')
236
+ parser.add_argument('--tokenizer', type=str, required=False,
237
+ help='Path to tokenizer')
238
+
239
+ # Add new argument for auto-generation
240
+ parser.add_argument('--prompt', type=str,
241
+ help='If specified, run once with this prompt and exit')
242
+
243
+ # Add no-warmup flag
244
+ parser.add_argument('--nw', action='store_true',
245
+ help='Skip warmup phase')
246
+
247
+ # Add debug level
248
+ parser.add_argument('--debug-level', type=int, default=0,
249
+ help='Debug level (0=none, 1=print prompts, 2=more verbose)')
250
+
251
+ # Model configuration
252
+ parser.add_argument('--context-length', type=int,
253
+ help='Context length for the model (default: 512); if not provided, it is detected from the ctxNUMBER suffix in the model directory name')
254
+ parser.add_argument('--batch-size', type=int,
255
+ help='Batch size for prefill (default: 64)')
256
+
257
+ args = parser.parse_args()
258
+
259
+ # If meta.yaml is provided, load parameters from it
260
+ if args.meta:
261
+ try:
262
+ with open(args.meta, 'r') as f:
263
+ meta = yaml.safe_load(f)
264
+ params = meta['model_info']['parameters']
265
+
266
+ # Set model directory to meta.yaml directory if not specified
267
+ if not args.d or args.d == '.':
268
+ args.d = str(Path(args.meta).parent)
269
+
270
+ # Build model paths based on parameters
271
+ prefix = params.get('model_prefix', 'llama') # Default to 'llama' if not specified
272
+ lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
273
+ lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
274
+ lut_embeddings = f"_lut{params['lut_embeddings']}" if params['lut_embeddings'] != 'none' else ''
275
+ num_chunks = int(params['num_chunks'])
276
+
277
+ # Set model paths if not specified
278
+ if not args.lmhead:
279
+ args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
280
+ if not args.embed:
281
+ args.embed = f'{prefix}_embeddings{lut_embeddings}' # Changed from lm_head to embeddings
282
+ if not args.ffn:
283
+ args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
284
+ if not args.tokenizer:
285
+ args.tokenizer = args.d
286
+
287
+ # Set other parameters if not overridden by command line
288
+ if args.context_length is None:
289
+ args.context_length = int(params['context_length'])
290
+ if args.batch_size is None:
291
+ args.batch_size = int(params['batch_size'])
292
+ args.num_chunks = num_chunks
293
+
294
+ # Parse split_lm_head parameter from meta.yaml
295
+ if 'split_lm_head' in params:
296
+ args.split_lm_head = int(params['split_lm_head'])
297
+ else:
298
+ args.split_lm_head = 8 # Default value
299
+
300
+ print(f"\nLoaded parameters from {args.meta}:")
301
+ print(f" Context Length: {args.context_length}")
302
+ print(f" Batch Size: {args.batch_size}")
303
+ print(f" Num Chunks: {args.num_chunks}")
304
+ print(f" Split LM Head: {args.split_lm_head}")
305
+ print(f" Models Directory: {args.d}")
306
+ print(f" Embeddings: {args.embed}")
307
+ print(f" LM Head: {args.lmhead}")
308
+ print(f" FFN: {args.ffn}")
309
+
310
+ except Exception as e:
311
+ print(f"\nError loading meta.yaml: {str(e)}")
312
+ sys.exit(1)
313
+
314
+ return args
315
+
316
+ def load_metadata(model,args):
317
+ # Extract metadata and config parameters
318
+ metadata = {}
319
+ if hasattr(model, 'user_defined_metadata'):
320
+ meta = model.user_defined_metadata
321
+
322
+ # Extract key parameters with defaults
323
+ metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
324
+ metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length'])) # Added state_length
325
+ metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
326
+ metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
327
+ metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))
328
+
329
+ print("\nExtracted Parameters:")
330
+ print(f" Context Length: {metadata['context_length']}")
331
+ print(f" State Length: {metadata['state_length']}")
332
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
333
+ print(f" LUT Bits: {metadata['lut_bits']}")
334
+ print(f" Number of Chunks: {metadata['num_chunks']}")
335
+
336
+ # Print model info
337
+ print("\nModel Info:")
338
+ if 'com.anemll.info' in meta:
339
+ print(f" {meta['com.anemll.info']}")
340
+ if 'com.github.apple.coremltools.version' in meta:
341
+ print(f" CoreML Tools: {meta['com.github.apple.coremltools.version']}")
342
+
343
+ # Print model input/output shapes
344
+ print("\nModel Shapes:")
345
+ if hasattr(model, 'input_description'):
346
+ print(" Inputs:")
347
+ try:
348
+ if hasattr(model.input_description, 'items'):
349
+ for name, desc in model.input_description.items():
350
+ print(f" {name}: {desc}")
351
+ else:
352
+ print(f" {model.input_description}")
353
+ except:
354
+ print(f" Input description: {type(model.input_description)}")
355
+ if hasattr(model, 'output_description'):
356
+ print(" Outputs:")
357
+ try:
358
+ if hasattr(model.output_description, 'items'):
359
+ for name, desc in model.output_description.items():
360
+ print(f" {name}: {desc}")
361
+ else:
362
+ print(f" {model.output_description}")
363
+ except:
364
+ print(f" Output description: {type(model.output_description)}")
365
+ else:
366
+ print("\nWarning: No metadata found in model")
367
+
368
+ # Check if model directory name contains context length pattern (ctxXXX)
369
+ ctx_len = 512
370
+ if args.context_length is None:
371
+ import re
372
+ ctx_match = re.search(r'ctx(\d+)', str(args.d))
373
+ if ctx_match:
374
+ ctx_len0 = int(ctx_match.group(1))
375
+ if 512 <= ctx_len0 <= 8096:
376
+ ctx_len = ctx_len0
377
+ print(f"\nDetected context length {ctx_len} from directory name")
378
+ else:
379
+ print(f"\nWarning: No context length found in directory name {args.d}; using default {ctx_len}")
380
+ else:
381
+ ctx_len = args.context_length
382
+
383
+ # Use defaults or values from args
384
+ metadata['context_length'] = ctx_len
385
+ metadata['state_length'] = ctx_len
386
+ # Get batch size from args or use default
387
+ metadata['batch_size'] = getattr(args, 'batch_size', 64)
388
+ metadata['lut_bits'] = 4
389
+ metadata['num_chunks'] = getattr(args, 'num_chunks', 4)
390
+ print("\nUsing parameters:")
391
+ print(f" Context Length: {metadata['context_length']}")
392
+ print(f" State Length: {metadata['state_length']}")
393
+ print(f" Prefill Batch Size: {metadata['batch_size']}")
394
+ print(f" LUT Bits: {metadata['lut_bits']}")
395
+ print(f" Number of Chunks: {metadata['num_chunks']}")
396
+
397
+ # Override with values from args if they exist
398
+ if hasattr(args, 'batch_size') and args.batch_size is not None:
399
+ metadata['batch_size'] = args.batch_size
400
+ print(f"\nOverriding batch size from args: {args.batch_size}")
401
+ if hasattr(args, 'num_chunks') and args.num_chunks is not None:
402
+ metadata['num_chunks'] = args.num_chunks
403
+ print(f"\nOverriding num chunks from args: {args.num_chunks}")
404
+
405
+ return metadata
406
+
407
+ def load_models(args,metadata):
408
+ """Load all required models and extract metadata."""
409
+ print("\nLoading models...")
410
+
411
+ try:
412
+ # Load embeddings model
413
+ print("\nLoading embeddings model...")
414
+ embed_path = parse_model_path(args.embed)
415
+ print(f"Loading from: {embed_path}")
416
+ embed_model = load_model(embed_path)
417
+ print("Embeddings model loaded successfully")
418
+ metadata = load_metadata(embed_model,args)
419
+
420
+
421
+
422
+ # Load LM head model
423
+ print("\nLoading LM head model...")
424
+ lmhead_path = parse_model_path(args.lmhead)
425
+ print(f"Loading from: {lmhead_path}")
426
+ lmhead_model = load_model(lmhead_path)
427
+ print("LM head model loaded successfully")
428
+
429
+ # Parse FFN path and find chunks if needed
430
+ print("\nLoading FFN+PREFILL model(s)...")
431
+ ffn_path = parse_model_path(args.ffn)
432
+ chunk_no, total_chunks = parse_ffn_filename(ffn_path)
433
+
434
+ ffn_models = []
435
+ if chunk_no and total_chunks:
436
+ print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
437
+ # Find and load all chunks
438
+ chunk_paths = find_all_chunks(ffn_path)
439
+ if len(chunk_paths) != total_chunks:
440
+ raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")
441
+
442
+ for chunk_path in chunk_paths:
443
+ print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
444
+ try:
445
+ # For chunked models, we need both infer and prefill functions
446
+ ffn_models.append({
447
+ 'infer': load_model(chunk_path, function_name='infer'),
448
+ 'prefill': load_model(chunk_path, function_name='prefill')
449
+ })
450
+ print("Chunk loaded successfully")
451
+ except Exception as e:
452
+ print(f"Error loading chunk {chunk_path}: {str(e)}")
453
+ raise
454
+ metadata = load_metadata(ffn_models[0],args)
455
+
456
+ else:
457
+ print("\nLoading single FFN model...")
458
+ ffn_models.append(load_model(ffn_path))
459
+ print("FFN model loaded successfully")
460
+
461
+ return embed_model, ffn_models, lmhead_model, metadata
462
+
463
+ except Exception as e:
464
+ print(f"\nError loading models: {str(e)}")
465
+ print("\nPlease ensure all model files exist and are accessible.")
466
+ print("Expected files:")
467
+ print(f" Embeddings: {args.embed}")
468
+ print(f" LM Head: {args.lmhead}")
469
+ print(f" FFN: {args.ffn}")
470
+ raise
471
+
472
+ # At the top of the file, make this a default path
473
+
474
+ def initialize_tokenizer(model_path=None):
475
+ """Initialize and configure the tokenizer."""
476
+ try:
477
+
478
+
479
+ tokenizer = AutoTokenizer.from_pretrained(
480
+ str(model_path),
481
+ use_fast=False,
482
+ trust_remote_code=True
483
+ )
484
+
485
+ print("\nTokenizer Configuration:")
486
+ print(f"Tokenizer type: {type(tokenizer)}")
487
+ print(f"Tokenizer name: {tokenizer.__class__.__name__}")
488
+ print(f"Vocabulary size: {len(tokenizer)}")
489
+ print(f"Model max length: {tokenizer.model_max_length}")
490
+
491
+ if tokenizer.pad_token is None:
492
+ tokenizer.pad_token = tokenizer.eos_token
493
+ tokenizer.pad_token_id = tokenizer.eos_token_id
494
+ print("Set PAD token to EOS token")
495
+
496
+ tokenizer.padding_side = "left"
497
+
498
+ print(f"\nSpecial Tokens:")
499
+ print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
500
+ print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
501
+ print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
502
+ print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")
503
+
504
+ return tokenizer
505
+
506
+ except Exception as e:
507
+ print(f"\nError: Failed to load tokenizer from {model_path}")
508
+ print(f"Error details: {str(e)}")
509
+ print(f"Error type: {type(e)}")
510
+ print("\nThis code requires a Llama 3.2 model for chat template functionality.")
511
+ print("Please provide the path to a Llama 3.2 model directory.")
512
+ import traceback
513
+ traceback.print_exc()
514
+ raise
515
+
516
+
517
+
518
+ def make_causal_mask(length, start):
519
+ """Create causal attention mask."""
520
+ mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
521
+ row_indices = np.arange(length).reshape(length, 1)
522
+ col_indices = np.arange(length).reshape(1, length)
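+ # Positions j <= i + start are unmasked (set to 0); all later positions stay at -inf so a token cannot attend ahead.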
523
+ mask[:, :, col_indices <= (row_indices + start)] = 0
524
+ return mask
525
+
526
+ def run_prefill(embed_model, ffn_models, input_ids, current_pos, context_length, batch_size, state, causal_mask):
527
+ """Run prefill on the input sequence."""
528
+ #print(f"[DEBUG] Running prefill from 0 to {current_pos}")
529
+
530
+ # Process in batches
531
+ batch_pos = 0
532
+ while batch_pos < current_pos:
533
+ batch_end = min(batch_pos + batch_size, current_pos)
534
+ current_batch_size = batch_end - batch_pos
535
+
536
+ #print(f"[DEBUG] Prefill batch {batch_pos}-{batch_end} (size={current_batch_size})")
537
+
538
+ # Get current batch
539
+ batch_input = input_ids[:, batch_pos:batch_end]
540
+
541
+ # Pad to full batch size
542
+ batch_input = F.pad(
543
+ batch_input,
544
+ (0, batch_size - current_batch_size),
545
+ value=0
546
+ )
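+ # The converted models run prefill on fixed-size batches (the embeddings model only accepts [1, 1] or [1, 64] inputs), so the final partial batch is zero-padded.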
547
+
548
+ # Generate position IDs for this batch
549
+ position_ids = torch.arange(batch_pos, batch_pos + batch_size, dtype=torch.int32)
550
+
551
+ # Use the pre-initialized causal mask and extract the batch portion
552
+ batch_causal_mask = causal_mask[:, :, batch_pos:batch_pos + batch_size, :]
553
+
554
+ # Run embeddings
555
+ hidden_states = torch.from_numpy(
556
+ embed_model.predict({'input_ids': batch_input.numpy()})['hidden_states']
557
+ )
558
+
559
+ # Run through FFN chunks
560
+ for ffn_model in ffn_models:
561
+ if isinstance(ffn_model, dict):
562
+ inputs = {
563
+ 'hidden_states': hidden_states.numpy(),
564
+ 'position_ids': position_ids.numpy(),
565
+ 'causal_mask': batch_causal_mask.numpy(),
566
+ 'current_pos': np.array([batch_pos], dtype=np.int32)
567
+ }
568
+ output = ffn_model['prefill'].predict(inputs, state)
569
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
570
+
571
+ batch_pos = batch_end
572
+
573
+ return torch.tensor([current_pos], dtype=torch.int32)
574
+
575
+ def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, state, causal_mask, metadata=None, temperature=0.0):
576
+ """Generate the next token."""
577
+ # Get current token
578
+ current_token = input_ids[:, pos-1:pos]
579
+
580
+ # Run embeddings
581
+ hidden_states = torch.from_numpy(
582
+ embed_model.predict({'input_ids': current_token.numpy()})['hidden_states']
583
+ )
584
+
585
+ # Create masks
586
+ update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
587
+ update_mask[0, 0, pos-1, 0] = 1.0
588
+ position_ids = torch.tensor([pos-1], dtype=torch.int32)
589
+
590
+ # Use the pre-initialized causal mask and extract the single position portion
591
+ single_causal_mask = causal_mask[:, :, pos-1:pos, :]
592
+
593
+ # Run through FFN chunks
594
+ for ffn_model in ffn_models:
595
+ if isinstance(ffn_model, dict):
596
+ inputs = {
597
+ 'hidden_states': hidden_states.numpy(),
598
+ 'update_mask': update_mask.numpy(),
599
+ 'position_ids': position_ids.numpy(),
600
+ 'causal_mask': single_causal_mask.numpy(),
601
+ 'current_pos': position_ids.numpy()
602
+ }
603
+ output = ffn_model['infer'].predict(inputs, state)
604
+ hidden_states = torch.from_numpy(output['output_hidden_states'])
605
+
606
+ # Run LM head and get next token
607
+ lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy()})
608
+
609
+ if 'logits1' in lm_output:
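+ # The LM head is split into several output heads (logits1..logitsN, N = split_lm_head, default 8); concatenating them along the last dim reconstructs the full-vocabulary logits.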
610
+ logits_parts = []
611
+ for i in range(1, metadata.get('split_lm_head', 8) + 1):
612
+ key = f'logits{i}'
613
+ if key in lm_output:
614
+ logits_parts.append(torch.from_numpy(lm_output[key]))
615
+ logits = torch.cat(logits_parts, dim=-1)
616
+ else:
617
+ logits = torch.from_numpy(lm_output['output_logits'])
618
+
619
+ if temperature > 0:
620
+ logits = logits / temperature
621
+ probs = F.softmax(logits[0, -1, :], dim=-1)
622
+ next_token = torch.multinomial(probs, num_samples=1).item()
623
+ else:
624
+ next_token = torch.argmax(logits[0, -1, :]).item()
625
+
626
+ return next_token
627
+
628
+ def create_unified_state(ffn_models, context_length):
629
+ """Create unified KV cache state for transformer."""
630
+ if isinstance(ffn_models[0], dict):
631
+ # Use first FFN model's prefill function to create state
632
+ state = ffn_models[0]['prefill'].make_state()
633
+ print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
634
+ return state
635
+ else:
636
+ state = ffn_models[0].make_state()
637
+ print("\nCreated unified transformer state")
638
+ return state
639
+
640
+ def initialize_causal_mask(context_length):
641
+ """Initialize causal mask for transformer attention."""
642
+ causal_mask = make_causal_mask(context_length, 0)
643
+ causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
644
+ print(f"\nInitialized causal mask for context length {context_length}")
645
+ return causal_mask
646
+
647
+ def get_user_input():
648
+ """Get input from user, handling special key combinations."""
649
+ global THINKING_MODE
650
+ try:
651
+ import termios
652
+ import tty
653
+ import sys
654
+
655
+ def _getch():
656
+ fd = sys.stdin.fileno()
657
+ old_settings = termios.tcgetattr(fd)
658
+ try:
659
+ tty.setraw(sys.stdin.fileno())
660
+ ch = sys.stdin.read(1)
661
+ finally:
662
+ termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
663
+ return ch
664
+
665
+ buffer = []
666
+ while True:
667
+ char = _getch()
668
+
669
+ # Debug: print the character code
670
+ print(f"\nKey pressed: {repr(char)} (hex: {hex(ord(char))})")
671
+
672
+ # Check for Enter key
673
+ if char == '\r' or char == '\n':
674
+ print() # Move to next line
675
+ input_text = ''.join(buffer)
676
+ # Check if the command is /t
677
+ if input_text == '/t':
678
+ THINKING_MODE = not THINKING_MODE
679
+ print(f"Thinking mode {'ON' if THINKING_MODE else 'OFF'}")
680
+ buffer = [] # Clear buffer
681
+ print(f"\n{LIGHT_GREEN}You{' (thinking)' if THINKING_MODE else ''}:{RESET_COLOR}", end=' ', flush=True)
682
+ continue
683
+ return input_text
684
+
685
+ # Handle backspace
686
+ if char == '\x7f': # backspace
687
+ if buffer:
688
+ buffer.pop()
689
+ sys.stdout.write('\b \b') # Erase character
690
+ sys.stdout.flush()
691
+ continue
692
+
693
+ # Handle Ctrl-C
694
+ if char == '\x03': # Ctrl-C
695
+ print("^C")
696
+ raise KeyboardInterrupt
697
+
698
+ # Print character and add to buffer
699
+ sys.stdout.write(char)
700
+ sys.stdout.flush()
701
+ buffer.append(char)
702
+
703
+ except ImportError:
704
+ # Fallback for systems without termios
705
+ return input("> ")
706
+
707
+ def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, causal_mask, auto_prompt=None, warmup=False):
708
+ """Interactive chat loop."""
709
+ global THINKING_MODE
710
+ global DEBUG_LEVEL
711
+ context_length = metadata.get('context_length')
712
+ batch_size = metadata.get('batch_size', 64)
713
+
714
+ if not warmup:
715
+ print(f"\nUsing context length: {context_length}")
716
+ print("\nStarting chat session. Press Ctrl+D to exit.")
717
+ print("Type your message and press Enter to chat. Use /t to toggle thinking mode.")
718
+ print(f"Thinking mode is {'ON' if THINKING_MODE else 'OFF'}")
719
+
720
+ # Keep track of conversation history
721
+ conversation = []
722
+
723
+ try:
724
+ while True:
725
+ try:
726
+ if not warmup:
727
+ print(f"\n{LIGHT_GREEN}You{' (thinking)' if THINKING_MODE else ''}:{RESET_COLOR}", end=' ', flush=True)
728
+ if auto_prompt is not None:
729
+ user_input = auto_prompt
730
+ if not warmup:
731
+ print(user_input)
732
+ else:
733
+ user_input = input().strip()
734
+ except EOFError:
735
+ if not warmup:
736
+ print("\nExiting chat...")
737
+ break
738
+
739
+ if not user_input:
740
+ continue
741
+
742
+ # Handle /t command
743
+ if user_input == "/t":
744
+ THINKING_MODE = not THINKING_MODE
745
+ print(f"Thinking mode {'ON' if THINKING_MODE else 'OFF'}")
746
+ continue
747
+
748
+ # Add user message to conversation
749
+ conversation.append({"role": "user", "content": user_input})
750
+
751
+ # Format using chat template with full history
752
+ if THINKING_MODE:
753
+ # Add thinking prompt to system message
754
+ conversation_with_thinking = [{"role": "system", "content": THINKING_PROMPT}] + conversation
755
+ base_input_ids = tokenizer.apply_chat_template(
756
+ conversation_with_thinking,
757
+ return_tensors="pt",
758
+ add_generation_prompt=True
759
+ ).to(torch.int32)
760
+
761
+ # Print full prompt if debug level >= 1
762
+ if DEBUG_LEVEL >= 1 and not warmup:
763
+ print(f"\n{DARK_BLUE}Debug: Full prompt with thinking:{RESET_COLOR}")
764
+ print(tokenizer.decode(base_input_ids[0]))
765
+ else:
766
+ base_input_ids = tokenizer.apply_chat_template(
767
+ conversation,
768
+ return_tensors="pt",
769
+ add_generation_prompt=True
770
+ ).to(torch.int32)
771
+
772
+ # Print full prompt if debug level >= 1
773
+ if DEBUG_LEVEL >= 1 and not warmup:
774
+ print(f"\n{DARK_BLUE}Debug: Full prompt:{RESET_COLOR}")
775
+ print(tokenizer.decode(base_input_ids[0]))
776
+
777
+ # Check if we need to trim history
778
+ while base_input_ids.size(1) > context_length - 100: # Leave room for response
779
+ # Remove oldest message pair (user + assistant)
780
+ if len(conversation) > 2:
781
+ conversation = conversation[2:] # Remove oldest pair
782
+ base_input_ids = tokenizer.apply_chat_template(
783
+ conversation,
784
+ return_tensors="pt",
785
+ add_generation_prompt=True
786
+ ).to(torch.int32)
787
+ else:
788
+ # If only current message remains and still too long, truncate
789
+ base_input_ids = base_input_ids[:, -context_length//2:]
790
+ break
791
+
792
+ context_pos = base_input_ids.size(1)
793
+
794
+ # Pad sequence to context_size
795
+ input_ids = F.pad(
796
+ base_input_ids,
797
+ (0, context_length - context_pos),
798
+ value=0
799
+ )
800
+
801
+ if not warmup:
802
+ print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)
803
+
804
+ # split_lm_head should already be in metadata from caller
805
+
806
+ # Initialize token printer and collect response
807
+ token_printer = TokenPrinter(tokenizer)
808
+ response_tokens = []
809
+ generation_start_time = time.time()
810
+
811
+ try:
812
+ # Run prefill on entire context
813
+ current_pos = run_prefill(
814
+ embed_model,
815
+ ffn_models,
816
+ input_ids,
817
+ context_pos,
818
+ context_length,
819
+ batch_size,
820
+ state,
821
+ causal_mask
822
+ )
823
+ #print(f"\n[DEBUG] After initial prefill - current_pos: {current_pos}")
824
+
825
+ # Generation loop
826
+ pos = context_pos
827
+ tokens_generated = 0
828
+ inference_start = time.time() # Start inference timing
829
+
830
+ while True:
831
+ # Check if we need to shift window
832
+ if pos >= context_length - 2:
833
+ # Calculate shift to maintain full batches
834
+ batch_size = metadata.get('batch_size', 64)
835
+ # Calculate max batches that fit in context
836
+ max_batches = context_length // batch_size
837
+ desired_batches = max(1, max_batches - 2) # Leave room for new tokens
838
+ new_size = min(desired_batches * batch_size, context_length - batch_size)
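+ # e.g. with context_length=1024 and batch_size=64: max_batches=16, desired_batches=14, new_size=896, so the most recent 896 tokens are kept and re-prefilled.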
839
+
840
+ # Create shifted input_ids
841
+ tmp = torch.zeros((1, context_length), dtype=torch.int32)
842
+ tmp[:,0:new_size] = input_ids[:,pos-new_size:pos]
843
+ input_ids = tmp
844
+
845
+ # Reset state and run prefill
846
+ # keep the same state
847
+ #state = create_unified_state(ffn_models, context_length)
848
+ current_pos = run_prefill(
849
+ embed_model,
850
+ ffn_models,
851
+ input_ids,
852
+ new_size, # Prefill the entire shifted content
853
+ context_length,
854
+ batch_size,
855
+ state,
856
+ causal_mask
857
+ )
858
+
859
+ # Start generating from the next position
860
+ pos = new_size # Don't back up, continue from where we left off
861
+
862
+ #print(f"\n[DEBUG] After shift - next token will be at pos {pos}")
863
+ #print(f"[DEBUG] Context before next token: {tokenizer.decode(input_ids[0, pos-40:pos])}")
864
+
865
+ window_shifted = True
866
+
867
+ # Generate next token
868
+ next_token = generate_next_token(
869
+ embed_model,
870
+ ffn_models,
871
+ lmhead_model,
872
+ input_ids,
873
+ pos,
874
+ context_length,
875
+ state,
876
+ causal_mask,
877
+ metadata
878
+ )
879
+
880
+ # Add token
881
+ input_ids[0, pos] = next_token
882
+ if not warmup:
883
+ token_printer.add_token(next_token)
884
+ token_printer.drain_buffer()
885
+ response_tokens.append(next_token)
886
+
887
+ pos += 1
888
+ tokens_generated += 1
889
+
890
+ # In warmup mode, limit tokens
891
+ if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
892
+ break
893
+
894
+ # Check for all possible EOS tokens
895
+ eos_token_ids = tokenizer.eos_token_id
896
+ if isinstance(eos_token_ids, list):
897
+ if next_token in eos_token_ids:
898
+ break
899
+ else:
900
+ if next_token == eos_token_ids:
901
+ break
902
+
903
+ inference_time = time.time() - inference_start # Calculate inference time
904
+
905
+ # Add assistant response to conversation
906
+ response_text = token_printer.stop()
907
+ conversation.append({"role": "assistant", "content": response_text})
908
+
909
+ # Print stats only if not in warmup
910
+ if not warmup:
911
+ total_time = time.time() - generation_start_time
912
+ prefill_time = total_time - inference_time
913
+ inference_tokens_per_sec = len(response_tokens) / inference_time if inference_time > 0 else 0
914
+ prefill_ms = prefill_time * 1000
915
+ prefill_tokens_per_sec = context_pos / prefill_time if prefill_time > 0 else 0
916
+ print(f"{DARK_BLUE}{inference_tokens_per_sec:.1f} t/s, "
917
+ f"TTFT: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s), "
918
+ f"{len(response_tokens)} tokens{RESET_COLOR}")
919
+
920
+ if auto_prompt is not None:
921
+ break
922
+
923
+ except KeyboardInterrupt:
924
+ if not warmup:
925
+ print("\nGeneration interrupted")
926
+ token_printer.stop()
927
+ continue
928
+
929
+ except Exception as e:
930
+ if not warmup:
931
+ print(f"\nError in chat loop: {str(e)}")
932
+ import traceback
933
+ traceback.print_exc()
934
+
935
+ def main():
936
+ args = parse_args()
937
+ global DEBUG_LEVEL
938
+ DEBUG_LEVEL = args.debug_level
939
+
940
+ # Convert directory to absolute path
941
+ model_dir = Path(args.d).resolve()
942
+ if not model_dir.exists():
943
+ print(f"\nError: Model directory not found: {model_dir}")
944
+ return 1
945
+
946
+ print(f"\nUsing model directory: {model_dir}")
947
+ print(f"Context length: {args.context_length}")
948
+
949
+ try:
950
+ # Update paths to be relative to model directory
951
+ args.embed = str(model_dir / args.embed)
952
+ args.ffn = str(model_dir / args.ffn)
953
+ args.lmhead = str(model_dir / args.lmhead)
954
+
955
+ # Handle tokenizer path separately since it's not relative to model_dir
956
+ if args.tokenizer is None:
957
+ args.tokenizer = str(model_dir)
958
+
959
+ if not Path(args.tokenizer).exists():
960
+ print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
961
+ return 1
962
+
963
+ args.tokenizer = str(Path(args.tokenizer).resolve()) # Convert to absolute path
964
+ print(f"Using tokenizer path: {args.tokenizer}")
965
+
966
+ metadata = {}
967
+ # Load models and extract metadata
968
+ embed_model, ffn_models, lmhead_model, metadata = load_models(args,metadata)
969
+
970
+ print(f"\nMetadata before args.context_length override: {metadata}")
971
+
972
+ # Override context length from command line if provided
973
+ if args.context_length is not None:
974
+ metadata['context_length'] = args.context_length
975
+ metadata['state_length'] = args.context_length # Also update state_length
976
+ print(f"\nOverriding context length from command line: {args.context_length}")
977
+
978
+ print(f"\nMetadata after load_models: {metadata}")
979
+
980
+ # Load tokenizer with resolved path
981
+ tokenizer = initialize_tokenizer(args.tokenizer)
982
+ if tokenizer is None:
983
+ raise RuntimeError("Failed to initialize tokenizer")
984
+
985
+ # Create unified state once
986
+ state = create_unified_state(ffn_models, metadata['context_length'])
987
+
988
+ # Initialize causal mask once
989
+ causal_mask = initialize_causal_mask(metadata['context_length'])
990
+
991
+ # Add split_lm_head to metadata for generate_next_token
992
+ metadata['split_lm_head'] = getattr(args, 'split_lm_head', 8)
993
+
994
+ # Warmup runs to prevent Python GIL issues with CoreML !
995
+ if not args.nw:
996
+ for i in range(2):
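+ # Two short warmup generations (limited to WARMUP_TOKEN_LIMIT tokens each) exercise the prefill and infer paths before the timed run.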
997
+ chat_loop(
998
+ embed_model=embed_model,
999
+ ffn_models=ffn_models,
1000
+ lmhead_model=lmhead_model,
1001
+ tokenizer=tokenizer,
1002
+ metadata=metadata,
1003
+ state=state, # Pass the state
1004
+ causal_mask=causal_mask, # Pass the causal mask
1005
+ warmup=True,
1006
+ auto_prompt="who are you?"
1007
+ )
1008
+
1009
+ # Main run
1010
+ chat_loop(
1011
+ embed_model=embed_model,
1012
+ ffn_models=ffn_models,
1013
+ lmhead_model=lmhead_model,
1014
+ tokenizer=tokenizer,
1015
+ metadata=metadata,
1016
+ state=state, # Pass the state
1017
+ causal_mask=causal_mask, # Pass the causal mask
1018
+ warmup=False,
1019
+ auto_prompt=args.prompt
1020
+ )
1021
+
1022
+ except Exception as e:
1023
+ print(f"\nError: {str(e)}")
1024
+ import traceback
1025
+ traceback.print_exc()
1026
+ return 1
1027
+
1028
+ return 0
1029
+
1030
+ if __name__ == "__main__":
1031
+ exit(main())
config.json ADDED
@@ -0,0 +1,4 @@
1
+ {
2
+ "tokenizer_class": "LlamaTokenizer",
3
+ "model_type": "llama"
4
+ }
llama_FFN_PF_lut6_chunk_01of01.mlmodelc/analytics/coremldata.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f322575c7bdfdf0e57d167501a1dfc49d89dccbb52bfa3fe2b9b37bff43afd1e
3
+ size 243
llama_FFN_PF_lut6_chunk_01of01.mlmodelc/coremldata.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c6b296c4342daa2f024114c60db9e404fc24a25ae2ca14fca3dac199a4be0693
3
+ size 981
llama_FFN_PF_lut6_chunk_01of01.mlmodelc/metadata.json ADDED
@@ -0,0 +1,336 @@
1
+ [
2
+ {
3
+ "metadataOutputVersion" : "3.0",
4
+ "userDefinedMetadata" : {
5
+ "com.anemll.chunk_no" : "1",
6
+ "com.github.apple.coremltools.source_dialect" : "TorchScript",
7
+ "com.github.apple.coremltools.source" : "torch==2.5.0",
8
+ "com.github.apple.coremltools.version" : "8.3.0",
9
+ "com.anemll.context_length" : "1024",
10
+ "com.anemll.num_chunks" : "1",
11
+ "com.anemll.batch_size" : "64",
12
+ "com.anemll.info" : "Converted with Anemll v0.3.3",
13
+ "com.anemll.lut_bits" : "6"
14
+ },
15
+ "availability" : {
16
+ "macOS" : "15.0",
17
+ "tvOS" : "18.0",
18
+ "visionOS" : "2.0",
19
+ "watchOS" : "11.0",
20
+ "iOS" : "18.0",
21
+ "macCatalyst" : "18.0"
22
+ },
23
+ "inputSchema" : [
24
+ {
25
+ "hasShapeFlexibility" : "0",
26
+ "isOptional" : "0",
27
+ "dataType" : "Float16",
28
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
29
+ "shortDescription" : "",
30
+ "shape" : "[1, 1, 2048]",
31
+ "name" : "hidden_states",
32
+ "type" : "MultiArray"
33
+ },
34
+ {
35
+ "hasShapeFlexibility" : "0",
36
+ "isOptional" : "0",
37
+ "dataType" : "Int32",
38
+ "formattedType" : "MultiArray (Int32 1)",
39
+ "shortDescription" : "",
40
+ "shape" : "[1]",
41
+ "name" : "position_ids",
42
+ "type" : "MultiArray"
43
+ },
44
+ {
45
+ "hasShapeFlexibility" : "0",
46
+ "isOptional" : "0",
47
+ "dataType" : "Float16",
48
+ "formattedType" : "MultiArray (Float16 1 × 1 × 1 × 1024)",
49
+ "shortDescription" : "",
50
+ "shape" : "[1, 1, 1, 1024]",
51
+ "name" : "causal_mask",
52
+ "type" : "MultiArray"
53
+ },
54
+ {
55
+ "hasShapeFlexibility" : "0",
56
+ "isOptional" : "0",
57
+ "dataType" : "Int32",
58
+ "formattedType" : "MultiArray (Int32 1)",
59
+ "shortDescription" : "",
60
+ "shape" : "[1]",
61
+ "name" : "current_pos",
62
+ "type" : "MultiArray"
63
+ }
64
+ ],
65
+ "outputSchema" : [
66
+ {
67
+ "hasShapeFlexibility" : "0",
68
+ "isOptional" : "0",
69
+ "dataType" : "Float16",
70
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
71
+ "shortDescription" : "",
72
+ "shape" : "[1, 1, 2048]",
73
+ "name" : "output_hidden_states",
74
+ "type" : "MultiArray"
75
+ }
76
+ ],
77
+ "modelParameters" : [
78
+
79
+ ],
80
+ "storagePrecision" : "Mixed (Float16, Palettized (12 bits), Palettized (14 bits), Palettized (16 bits), UInt6)",
81
+ "method" : "predict",
82
+ "functions" : [
83
+ {
84
+ "inputSchema" : [
85
+ {
86
+ "hasShapeFlexibility" : "0",
87
+ "isOptional" : "0",
88
+ "dataType" : "Float16",
89
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
90
+ "shortDescription" : "",
91
+ "shape" : "[1, 1, 2048]",
92
+ "name" : "hidden_states",
93
+ "type" : "MultiArray"
94
+ },
95
+ {
96
+ "hasShapeFlexibility" : "0",
97
+ "isOptional" : "0",
98
+ "dataType" : "Int32",
99
+ "formattedType" : "MultiArray (Int32 1)",
100
+ "shortDescription" : "",
101
+ "shape" : "[1]",
102
+ "name" : "position_ids",
103
+ "type" : "MultiArray"
104
+ },
105
+ {
106
+ "hasShapeFlexibility" : "0",
107
+ "isOptional" : "0",
108
+ "dataType" : "Float16",
109
+ "formattedType" : "MultiArray (Float16 1 × 1 × 1 × 1024)",
110
+ "shortDescription" : "",
111
+ "shape" : "[1, 1, 1, 1024]",
112
+ "name" : "causal_mask",
113
+ "type" : "MultiArray"
114
+ },
115
+ {
116
+ "hasShapeFlexibility" : "0",
117
+ "isOptional" : "0",
118
+ "dataType" : "Int32",
119
+ "formattedType" : "MultiArray (Int32 1)",
120
+ "shortDescription" : "",
121
+ "shape" : "[1]",
122
+ "name" : "current_pos",
123
+ "type" : "MultiArray"
124
+ }
125
+ ],
126
+ "computePrecision" : "Mixed (Float16, Int16, Int32, UInt16)",
127
+ "storagePrecision" : "Mixed (Float16, Palettized (12 bits), Palettized (14 bits), Palettized (16 bits), UInt6)",
128
+ "stateSchema" : [
129
+ {
130
+ "dataType" : "Float16",
131
+ "isOptional" : "0",
132
+ "formattedType" : "State (Float16 32 × 8 × 1024 × 64)",
133
+ "shortDescription" : "",
134
+ "shape" : "[32, 8, 1024, 64]",
135
+ "name" : "model_model_kv_cache_0",
136
+ "type" : "State"
137
+ }
138
+ ],
139
+ "outputSchema" : [
140
+ {
141
+ "hasShapeFlexibility" : "0",
142
+ "isOptional" : "0",
143
+ "dataType" : "Float16",
144
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
145
+ "shortDescription" : "",
146
+ "shape" : "[1, 1, 2048]",
147
+ "name" : "output_hidden_states",
148
+ "type" : "MultiArray"
149
+ }
150
+ ],
151
+ "name" : "infer",
152
+ "mlProgramOperationTypeHistogram" : {
153
+ "Ios18.expandDims" : 64,
154
+ "Ios18.mul" : 226,
155
+ "Ios18.matmul" : 32,
156
+ "Identity" : 1,
157
+ "Ios18.exp" : 16,
158
+ "Ios18.realDiv" : 16,
159
+ "Ios18.greaterEqual" : 1,
160
+ "Select" : 1,
161
+ "Ios18.readState" : 33,
162
+ "Tile" : 32,
163
+ "Ios18.gather" : 2,
164
+ "Ios18.add" : 82,
165
+ "Ios18.layerNorm" : 33,
166
+ "Ios18.sliceUpdate" : 32,
167
+ "Ios18.writeState" : 32,
168
+ "Ios18.reshape" : 98,
169
+ "Ios16.reduceMax" : 16,
170
+ "Ios16.reduceSum" : 16,
171
+ "Ios18.constexprLutToDense" : 112,
172
+ "Ios18.conv" : 96,
173
+ "Ios18.concat" : 129,
174
+ "Ios18.transpose" : 64,
175
+ "Ios18.sub" : 48,
176
+ "Ios18.cast" : 2,
177
+ "Ios18.linear" : 16,
178
+ "Ios18.silu" : 16,
179
+ "Ios18.sliceByIndex" : 131,
180
+ "Ios18.squeeze" : 48
181
+ }
182
+ },
183
+ {
184
+ "inputSchema" : [
185
+ {
186
+ "hasShapeFlexibility" : "0",
187
+ "isOptional" : "0",
188
+ "dataType" : "Float16",
189
+ "formattedType" : "MultiArray (Float16 1 × 64 × 2048)",
190
+ "shortDescription" : "",
191
+ "shape" : "[1, 64, 2048]",
192
+ "name" : "hidden_states",
193
+ "type" : "MultiArray"
194
+ },
195
+ {
196
+ "hasShapeFlexibility" : "0",
197
+ "isOptional" : "0",
198
+ "dataType" : "Int32",
199
+ "formattedType" : "MultiArray (Int32 64)",
200
+ "shortDescription" : "",
201
+ "shape" : "[64]",
202
+ "name" : "position_ids",
203
+ "type" : "MultiArray"
204
+ },
205
+ {
206
+ "hasShapeFlexibility" : "0",
207
+ "isOptional" : "0",
208
+ "dataType" : "Float16",
209
+ "formattedType" : "MultiArray (Float16 1 × 1 × 64 × 1024)",
210
+ "shortDescription" : "",
211
+ "shape" : "[1, 1, 64, 1024]",
212
+ "name" : "causal_mask",
213
+ "type" : "MultiArray"
214
+ },
215
+ {
216
+ "hasShapeFlexibility" : "0",
217
+ "isOptional" : "0",
218
+ "dataType" : "Int32",
219
+ "formattedType" : "MultiArray (Int32 1)",
220
+ "shortDescription" : "",
221
+ "shape" : "[1]",
222
+ "name" : "current_pos",
223
+ "type" : "MultiArray"
224
+ }
225
+ ],
226
+ "computePrecision" : "Mixed (Float16, Int16, Int32, UInt16)",
227
+ "storagePrecision" : "Mixed (Float16, Palettized (12 bits), Palettized (14 bits), Palettized (16 bits), UInt6)",
228
+ "stateSchema" : [
229
+ {
230
+ "dataType" : "Float16",
231
+ "isOptional" : "0",
232
+ "formattedType" : "State (Float16 32 × 8 × 1024 × 64)",
233
+ "shortDescription" : "",
234
+ "shape" : "[32, 8, 1024, 64]",
235
+ "name" : "model_model_kv_cache_0",
236
+ "type" : "State"
237
+ }
238
+ ],
239
+ "outputSchema" : [
240
+ {
241
+ "hasShapeFlexibility" : "0",
242
+ "isOptional" : "0",
243
+ "dataType" : "Float16",
244
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
245
+ "shortDescription" : "",
246
+ "shape" : "[1, 1, 2048]",
247
+ "name" : "output_hidden_states",
248
+ "type" : "MultiArray"
249
+ }
250
+ ],
251
+ "name" : "prefill",
252
+ "mlProgramOperationTypeHistogram" : {
253
+ "Ios18.expandDims" : 63,
254
+ "Ios18.mul" : 221,
255
+ "Ios18.matmul" : 32,
256
+ "Ios18.exp" : 16,
257
+ "Ios18.realDiv" : 16,
258
+ "Ios18.greaterEqual" : 1,
259
+ "Select" : 1,
260
+ "Ios18.readState" : 33,
261
+ "Tile" : 32,
262
+ "Ios18.gather" : 2,
263
+ "Ios18.add" : 81,
264
+ "Ios18.layerNorm" : 31,
265
+ "Ios18.sliceUpdate" : 32,
266
+ "Ios18.writeState" : 32,
267
+ "Ios18.reshape" : 130,
268
+ "Ios16.reduceMax" : 16,
269
+ "Ios16.reduceSum" : 16,
270
+ "Ios18.constexprLutToDense" : 109,
271
+ "Ios18.conv" : 93,
272
+ "Ios18.concat" : 127,
273
+ "Ios18.transpose" : 112,
274
+ "Ios18.sub" : 48,
275
+ "Ios18.cast" : 2,
276
+ "Ios18.linear" : 16,
277
+ "Ios18.silu" : 15,
278
+ "Ios18.sliceByIndex" : 130,
279
+ "Ios18.squeeze" : 47
280
+ }
281
+ }
282
+ ],
283
+ "version" : "0.3.3",
284
+ "isUpdatable" : "0",
285
+ "defaultFunctionName" : "infer",
286
+ "specificationVersion" : 9,
287
+ "stateSchema" : [
288
+ {
289
+ "dataType" : "Float16",
290
+ "isOptional" : "0",
291
+ "formattedType" : "State (Float16 32 × 8 × 1024 × 64)",
292
+ "shortDescription" : "",
293
+ "shape" : "[32, 8, 1024, 64]",
294
+ "name" : "model_model_kv_cache_0",
295
+ "type" : "State"
296
+ }
297
+ ],
298
+ "computePrecision" : "Mixed (Float16, Int16, Int32, UInt16)",
299
+ "mlProgramOperationTypeHistogram" : {
300
+ "Ios18.expandDims" : 64,
301
+ "Ios18.mul" : 226,
302
+ "Ios18.matmul" : 32,
303
+ "Identity" : 1,
304
+ "Ios18.exp" : 16,
305
+ "Ios18.realDiv" : 16,
306
+ "Ios18.greaterEqual" : 1,
307
+ "Select" : 1,
308
+ "Ios18.readState" : 33,
309
+ "Tile" : 32,
310
+ "Ios18.gather" : 2,
311
+ "Ios18.add" : 82,
312
+ "Ios18.layerNorm" : 33,
313
+ "Ios18.sliceUpdate" : 32,
314
+ "Ios18.writeState" : 32,
315
+ "Ios18.reshape" : 98,
316
+ "Ios16.reduceMax" : 16,
317
+ "Ios16.reduceSum" : 16,
318
+ "Ios18.constexprLutToDense" : 112,
319
+ "Ios18.conv" : 96,
320
+ "Ios18.concat" : 129,
321
+ "Ios18.transpose" : 64,
322
+ "Ios18.sub" : 48,
323
+ "Ios18.cast" : 2,
324
+ "Ios18.linear" : 16,
325
+ "Ios18.silu" : 16,
326
+ "Ios18.sliceByIndex" : 131,
327
+ "Ios18.squeeze" : 48
328
+ },
329
+ "shortDescription" : "Anemll Model: Multifunction FFN+Prefill",
330
+ "generatedClassName" : "llama_FFN_PF_lut6_chunk_01of01",
331
+ "author" : "Converted with Anemll v0.3.3",
332
+ "modelType" : {
333
+ "name" : "MLModelType_mlProgram"
334
+ }
335
+ }
336
+ ]
llama_FFN_PF_lut6_chunk_01of01.mlmodelc/model.mil ADDED
The diff for this file is too large to render. See raw diff
 
llama_FFN_PF_lut6_chunk_01of01.mlmodelc/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d68754de1cdebe13265bc030a0a5296868cc3bf094b950e109e4309946b39ce5
3
+ size 736518464
llama_embeddings.mlmodelc/analytics/coremldata.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:49d2dea024cd283240d5e36256dcf6a19c5cbe7be248340cc3e8f4519bdd07a2
3
+ size 243
llama_embeddings.mlmodelc/coremldata.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:900ae9030929e0c486d333ea84228062e60fd088d8e7b879fa1cb58575919241
3
+ size 501
llama_embeddings.mlmodelc/metadata.json ADDED
@@ -0,0 +1,67 @@
1
+ [
2
+ {
3
+ "shortDescription" : "Anemll Model (Embeddings) converted to CoreML",
4
+ "metadataOutputVersion" : "3.0",
5
+ "outputSchema" : [
6
+ {
7
+ "hasShapeFlexibility" : "0",
8
+ "isOptional" : "0",
9
+ "dataType" : "Float16",
10
+ "formattedType" : "MultiArray (Float16)",
11
+ "shortDescription" : "",
12
+ "shape" : "[]",
13
+ "name" : "hidden_states",
14
+ "type" : "MultiArray"
15
+ }
16
+ ],
17
+ "version" : "0.3.3",
18
+ "modelParameters" : [
19
+
20
+ ],
21
+ "author" : "Converted with Anemll v0.3.3",
22
+ "specificationVersion" : 9,
23
+ "storagePrecision" : "Float16",
24
+ "mlProgramOperationTypeHistogram" : {
25
+ "Ios18.gather" : 1
26
+ },
27
+ "computePrecision" : "Mixed (Float16, Int32)",
28
+ "stateSchema" : [
29
+
30
+ ],
31
+ "isUpdatable" : "0",
32
+ "availability" : {
33
+ "macOS" : "15.0",
34
+ "tvOS" : "18.0",
35
+ "visionOS" : "2.0",
36
+ "watchOS" : "11.0",
37
+ "iOS" : "18.0",
38
+ "macCatalyst" : "18.0"
39
+ },
40
+ "modelType" : {
41
+ "name" : "MLModelType_mlProgram"
42
+ },
43
+ "inputSchema" : [
44
+ {
45
+ "shortDescription" : "",
46
+ "dataType" : "Int32",
47
+ "hasShapeFlexibility" : "1",
48
+ "isOptional" : "0",
49
+ "shapeFlexibility" : "1 × 1 | 1 × 64",
50
+ "formattedType" : "MultiArray (Int32 1 × 1)",
51
+ "type" : "MultiArray",
52
+ "shape" : "[1, 1]",
53
+ "name" : "input_ids",
54
+ "enumeratedShapes" : "[[1, 1], [1, 64]]"
55
+ }
56
+ ],
57
+ "userDefinedMetadata" : {
58
+ "com.anemll.context_length" : "1024",
59
+ "com.anemll.info" : "Converted with Anemll v0.3.3",
60
+ "com.github.apple.coremltools.source" : "torch==2.5.0",
61
+ "com.github.apple.coremltools.version" : "8.3.0",
62
+ "com.github.apple.coremltools.source_dialect" : "TorchScript"
63
+ },
64
+ "generatedClassName" : "llama_embeddings",
65
+ "method" : "predict"
66
+ }
67
+ ]
llama_embeddings.mlmodelc/model.mil ADDED
@@ -0,0 +1,11 @@
1
+ program(1.3)
2
+ [buildInfo = dict<string, string>({{"coremlc-component-MIL", "3500.11.1"}, {"coremlc-version", "3500.21.1"}, {"coremltools-component-torch", "2.5.0"}, {"coremltools-source-dialect", "TorchScript"}, {"coremltools-version", "8.3.0"}})]
3
+ {
4
+ func main<ios18>(tensor<int32, [1, ?]> input_ids) [FlexibleShapeInformation = tuple<tuple<string, dict<string, tensor<int32, [?]>>>, tuple<string, dict<string, dict<string, tensor<int32, [?]>>>>>((("DefaultShapes", {{"input_ids", [1, 1]}}), ("EnumeratedShapes", {{"79ae981e", {{"input_ids", [1, 1]}}}, {"ed9b58c8", {{"input_ids", [1, 64]}}}})))] {
5
+ int32 hidden_states_axis_0 = const()[name = string("hidden_states_axis_0"), val = int32(0)];
6
+ int32 hidden_states_batch_dims_0 = const()[name = string("hidden_states_batch_dims_0"), val = int32(0)];
7
+ bool hidden_states_validate_indices_0 = const()[name = string("hidden_states_validate_indices_0"), val = bool(false)];
8
+ tensor<fp16, [128256, 2048]> embed_tokens_weight_to_fp16 = const()[name = string("embed_tokens_weight_to_fp16"), val = tensor<fp16, [128256, 2048]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(64)))];
9
+ tensor<fp16, [1, ?, 2048]> hidden_states = gather(axis = hidden_states_axis_0, batch_dims = hidden_states_batch_dims_0, indices = input_ids, validate_indices = hidden_states_validate_indices_0, x = embed_tokens_weight_to_fp16)[name = string("hidden_states_cast_fp16")];
10
+ } -> (hidden_states);
11
+ }
llama_embeddings.mlmodelc/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:73f76c5cbd933c0ee67f251d2278431346670fa90b5891d58ffd859af8e8003e
3
+ size 525336704
llama_lm_head_lut6.mlmodelc/analytics/coremldata.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8c2f8afc5837ce29e4502a30d8a4cadcb8620de7f346f9f36e054fd19a02b780
3
+ size 243
llama_lm_head_lut6.mlmodelc/coremldata.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a728f19cf39c3effe946e0b90855005914899bb577694d883cd5497667efccf
3
+ size 691
llama_lm_head_lut6.mlmodelc/metadata.json ADDED
@@ -0,0 +1,140 @@
1
+ [
2
+ {
3
+ "shortDescription" : "Anemll Model (LM Head) converted to CoreML",
4
+ "metadataOutputVersion" : "3.0",
5
+ "outputSchema" : [
6
+ {
7
+ "hasShapeFlexibility" : "0",
8
+ "isOptional" : "0",
9
+ "dataType" : "Float16",
10
+ "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
11
+ "shortDescription" : "",
12
+ "shape" : "[1, 1, 16032]",
13
+ "name" : "logits1",
14
+ "type" : "MultiArray"
15
+ },
16
+ {
17
+ "hasShapeFlexibility" : "0",
18
+ "isOptional" : "0",
19
+ "dataType" : "Float16",
20
+ "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
21
+ "shortDescription" : "",
22
+ "shape" : "[1, 1, 16032]",
23
+ "name" : "logits2",
24
+ "type" : "MultiArray"
25
+ },
26
+ {
27
+ "hasShapeFlexibility" : "0",
28
+ "isOptional" : "0",
29
+ "dataType" : "Float16",
30
+ "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
31
+ "shortDescription" : "",
32
+ "shape" : "[1, 1, 16032]",
33
+ "name" : "logits3",
34
+ "type" : "MultiArray"
35
+ },
36
+ {
37
+ "hasShapeFlexibility" : "0",
38
+ "isOptional" : "0",
39
+ "dataType" : "Float16",
40
+ "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
41
+ "shortDescription" : "",
42
+ "shape" : "[1, 1, 16032]",
43
+ "name" : "logits4",
44
+ "type" : "MultiArray"
45
+ },
46
+ {
47
+ "hasShapeFlexibility" : "0",
48
+ "isOptional" : "0",
49
+ "dataType" : "Float16",
50
+ "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
51
+ "shortDescription" : "",
52
+ "shape" : "[1, 1, 16032]",
53
+ "name" : "logits5",
54
+ "type" : "MultiArray"
55
+ },
56
+ {
57
+ "hasShapeFlexibility" : "0",
58
+ "isOptional" : "0",
59
+ "dataType" : "Float16",
60
+ "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
61
+ "shortDescription" : "",
62
+ "shape" : "[1, 1, 16032]",
63
+ "name" : "logits6",
64
+ "type" : "MultiArray"
65
+ },
66
+ {
67
+ "hasShapeFlexibility" : "0",
68
+ "isOptional" : "0",
69
+ "dataType" : "Float16",
70
+ "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
71
+ "shortDescription" : "",
72
+ "shape" : "[1, 1, 16032]",
73
+ "name" : "logits7",
74
+ "type" : "MultiArray"
75
+ },
76
+ {
77
+ "hasShapeFlexibility" : "0",
78
+ "isOptional" : "0",
79
+ "dataType" : "Float16",
80
+ "formattedType" : "MultiArray (Float16 1 × 1 × 16032)",
81
+ "shortDescription" : "",
82
+ "shape" : "[1, 1, 16032]",
83
+ "name" : "logits8",
84
+ "type" : "MultiArray"
85
+ }
86
+ ],
87
+ "version" : "0.3.3",
88
+ "modelParameters" : [
89
+
90
+ ],
91
+ "author" : "Converted with Anemll v0.3.3",
92
+ "specificationVersion" : 9,
93
+ "storagePrecision" : "Mixed (Float16, Palettized (17 bits), UInt6)",
94
+ "mlProgramOperationTypeHistogram" : {
95
+ "Ios18.transpose" : 9,
96
+ "Ios18.constexprLutToDense" : 8,
97
+ "Ios18.expandDims" : 1,
98
+ "Ios18.conv" : 8,
99
+ "Ios18.squeeze" : 8
100
+ },
101
+ "computePrecision" : "Mixed (Float16, Int32)",
102
+ "stateSchema" : [
103
+
104
+ ],
105
+ "isUpdatable" : "0",
106
+ "availability" : {
107
+ "macOS" : "15.0",
108
+ "tvOS" : "18.0",
109
+ "visionOS" : "2.0",
110
+ "watchOS" : "11.0",
111
+ "iOS" : "18.0",
112
+ "macCatalyst" : "18.0"
113
+ },
114
+ "modelType" : {
115
+ "name" : "MLModelType_mlProgram"
116
+ },
117
+ "inputSchema" : [
118
+ {
119
+ "hasShapeFlexibility" : "0",
120
+ "isOptional" : "0",
121
+ "dataType" : "Float16",
122
+ "formattedType" : "MultiArray (Float16 1 × 1 × 2048)",
123
+ "shortDescription" : "",
124
+ "shape" : "[1, 1, 2048]",
125
+ "name" : "hidden_states",
126
+ "type" : "MultiArray"
127
+ }
128
+ ],
129
+ "userDefinedMetadata" : {
130
+ "com.github.apple.coremltools.source_dialect" : "TorchScript",
131
+ "com.anemll.info" : "Converted with Anemll v0.3.3",
132
+ "com.anemll.lut_bits" : "6",
133
+ "com.github.apple.coremltools.source" : "torch==2.5.0",
134
+ "com.github.apple.coremltools.version" : "8.3.0",
135
+ "com.anemll.context_length" : "1024"
136
+ },
137
+ "generatedClassName" : "llama_lm_head_lut6",
138
+ "method" : "predict"
139
+ }
140
+ ]
llama_lm_head_lut6.mlmodelc/model.mil ADDED
@@ -0,0 +1,98 @@
1
+ program(1.3)
2
+ [buildInfo = dict<string, string>({{"coremlc-component-MIL", "3500.11.1"}, {"coremlc-version", "3500.21.1"}})]
3
+ {
4
+ func main<ios18>(tensor<fp16, [1, 1, 2048]> hidden_states) {
5
+ tensor<int32, [3]> var_5 = const()[name = string("op_5"), val = tensor<int32, [3]>([0, 2, 1])];
6
+ tensor<int32, [1]> input_axes_0 = const()[name = string("input_axes_0"), val = tensor<int32, [1]>([2])];
7
+ tensor<fp16, [1, 2048, 1]> var_6_cast_fp16 = transpose(perm = var_5, x = hidden_states)[name = string("transpose_8")];
8
+ tensor<fp16, [1, 2048, 1, 1]> input_cast_fp16 = expand_dims(axes = input_axes_0, x = var_6_cast_fp16)[name = string("input_cast_fp16")];
9
+ string var_29_pad_type_0 = const()[name = string("op_29_pad_type_0"), val = string("valid")];
10
+ tensor<int32, [2]> var_29_strides_0 = const()[name = string("op_29_strides_0"), val = tensor<int32, [2]>([1, 1])];
11
+ tensor<int32, [4]> var_29_pad_0 = const()[name = string("op_29_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
12
+ tensor<int32, [2]> var_29_dilations_0 = const()[name = string("op_29_dilations_0"), val = tensor<int32, [2]>([1, 1])];
13
+ int32 var_29_groups_0 = const()[name = string("op_29_groups_0"), val = int32(1)];
14
+ tensor<fp16, [16032, 2048, 1, 1]> op_9_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint6, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(64))), lut = tensor<fp16, [2004, 1, 1, 1, 64, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(24625280))))[name = string("op_9_promoted_to_fp16_palettized")];
15
+ tensor<fp16, [1, 16032, 1, 1]> var_29_cast_fp16 = conv(dilations = var_29_dilations_0, groups = var_29_groups_0, pad = var_29_pad_0, pad_type = var_29_pad_type_0, strides = var_29_strides_0, weight = op_9_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_29_cast_fp16")];
16
+ tensor<int32, [1]> var_31_axes_0 = const()[name = string("op_31_axes_0"), val = tensor<int32, [1]>([2])];
17
+ tensor<fp16, [1, 16032, 1]> var_31_cast_fp16 = squeeze(axes = var_31_axes_0, x = var_29_cast_fp16)[name = string("op_31_cast_fp16")];
18
+ tensor<int32, [3]> var_34_perm_0 = const()[name = string("op_34_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
19
+ string var_55_pad_type_0 = const()[name = string("op_55_pad_type_0"), val = string("valid")];
20
+ tensor<int32, [2]> var_55_strides_0 = const()[name = string("op_55_strides_0"), val = tensor<int32, [2]>([1, 1])];
21
+ tensor<int32, [4]> var_55_pad_0 = const()[name = string("op_55_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
22
+ tensor<int32, [2]> var_55_dilations_0 = const()[name = string("op_55_dilations_0"), val = tensor<int32, [2]>([1, 1])];
23
+ int32 var_55_groups_0 = const()[name = string("op_55_groups_0"), val = int32(1)];
24
+ tensor<fp16, [16032, 2048, 1, 1]> op_35_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint6, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(24881856))), lut = tensor<fp16, [2004, 1, 1, 1, 64, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(49507072))))[name = string("op_35_promoted_to_fp16_palettized")];
25
+ tensor<fp16, [1, 16032, 1, 1]> var_55_cast_fp16 = conv(dilations = var_55_dilations_0, groups = var_55_groups_0, pad = var_55_pad_0, pad_type = var_55_pad_type_0, strides = var_55_strides_0, weight = op_35_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_55_cast_fp16")];
26
+ tensor<int32, [1]> var_57_axes_0 = const()[name = string("op_57_axes_0"), val = tensor<int32, [1]>([2])];
27
+ tensor<fp16, [1, 16032, 1]> var_57_cast_fp16 = squeeze(axes = var_57_axes_0, x = var_55_cast_fp16)[name = string("op_57_cast_fp16")];
28
+ tensor<int32, [3]> var_60_perm_0 = const()[name = string("op_60_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
29
+ string var_81_pad_type_0 = const()[name = string("op_81_pad_type_0"), val = string("valid")];
30
+ tensor<int32, [2]> var_81_strides_0 = const()[name = string("op_81_strides_0"), val = tensor<int32, [2]>([1, 1])];
31
+ tensor<int32, [4]> var_81_pad_0 = const()[name = string("op_81_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
32
+ tensor<int32, [2]> var_81_dilations_0 = const()[name = string("op_81_dilations_0"), val = tensor<int32, [2]>([1, 1])];
33
+ int32 var_81_groups_0 = const()[name = string("op_81_groups_0"), val = int32(1)];
34
+ tensor<fp16, [16032, 2048, 1, 1]> op_61_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint6, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(49763648))), lut = tensor<fp16, [2004, 1, 1, 1, 64, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(74388864))))[name = string("op_61_promoted_to_fp16_palettized")];
35
+ tensor<fp16, [1, 16032, 1, 1]> var_81_cast_fp16 = conv(dilations = var_81_dilations_0, groups = var_81_groups_0, pad = var_81_pad_0, pad_type = var_81_pad_type_0, strides = var_81_strides_0, weight = op_61_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_81_cast_fp16")];
36
+ tensor<int32, [1]> var_83_axes_0 = const()[name = string("op_83_axes_0"), val = tensor<int32, [1]>([2])];
37
+ tensor<fp16, [1, 16032, 1]> var_83_cast_fp16 = squeeze(axes = var_83_axes_0, x = var_81_cast_fp16)[name = string("op_83_cast_fp16")];
38
+ tensor<int32, [3]> var_86_perm_0 = const()[name = string("op_86_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
39
+ string var_107_pad_type_0 = const()[name = string("op_107_pad_type_0"), val = string("valid")];
40
+ tensor<int32, [2]> var_107_strides_0 = const()[name = string("op_107_strides_0"), val = tensor<int32, [2]>([1, 1])];
41
+ tensor<int32, [4]> var_107_pad_0 = const()[name = string("op_107_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
42
+ tensor<int32, [2]> var_107_dilations_0 = const()[name = string("op_107_dilations_0"), val = tensor<int32, [2]>([1, 1])];
43
+ int32 var_107_groups_0 = const()[name = string("op_107_groups_0"), val = int32(1)];
44
+ tensor<fp16, [16032, 2048, 1, 1]> op_87_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint6, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(74645440))), lut = tensor<fp16, [2004, 1, 1, 1, 64, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(99270656))))[name = string("op_87_promoted_to_fp16_palettized")];
45
+ tensor<fp16, [1, 16032, 1, 1]> var_107_cast_fp16 = conv(dilations = var_107_dilations_0, groups = var_107_groups_0, pad = var_107_pad_0, pad_type = var_107_pad_type_0, strides = var_107_strides_0, weight = op_87_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_107_cast_fp16")];
46
+ tensor<int32, [1]> var_109_axes_0 = const()[name = string("op_109_axes_0"), val = tensor<int32, [1]>([2])];
47
+ tensor<fp16, [1, 16032, 1]> var_109_cast_fp16 = squeeze(axes = var_109_axes_0, x = var_107_cast_fp16)[name = string("op_109_cast_fp16")];
48
+ tensor<int32, [3]> var_112_perm_0 = const()[name = string("op_112_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
49
+ string var_133_pad_type_0 = const()[name = string("op_133_pad_type_0"), val = string("valid")];
50
+ tensor<int32, [2]> var_133_strides_0 = const()[name = string("op_133_strides_0"), val = tensor<int32, [2]>([1, 1])];
51
+ tensor<int32, [4]> var_133_pad_0 = const()[name = string("op_133_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
52
+ tensor<int32, [2]> var_133_dilations_0 = const()[name = string("op_133_dilations_0"), val = tensor<int32, [2]>([1, 1])];
53
+ int32 var_133_groups_0 = const()[name = string("op_133_groups_0"), val = int32(1)];
54
+ tensor<fp16, [16032, 2048, 1, 1]> op_113_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint6, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(99527232))), lut = tensor<fp16, [2004, 1, 1, 1, 64, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(124152448))))[name = string("op_113_promoted_to_fp16_palettized")];
55
+ tensor<fp16, [1, 16032, 1, 1]> var_133_cast_fp16 = conv(dilations = var_133_dilations_0, groups = var_133_groups_0, pad = var_133_pad_0, pad_type = var_133_pad_type_0, strides = var_133_strides_0, weight = op_113_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_133_cast_fp16")];
56
+ tensor<int32, [1]> var_135_axes_0 = const()[name = string("op_135_axes_0"), val = tensor<int32, [1]>([2])];
57
+ tensor<fp16, [1, 16032, 1]> var_135_cast_fp16 = squeeze(axes = var_135_axes_0, x = var_133_cast_fp16)[name = string("op_135_cast_fp16")];
58
+ tensor<int32, [3]> var_138_perm_0 = const()[name = string("op_138_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
59
+ string var_159_pad_type_0 = const()[name = string("op_159_pad_type_0"), val = string("valid")];
60
+ tensor<int32, [2]> var_159_strides_0 = const()[name = string("op_159_strides_0"), val = tensor<int32, [2]>([1, 1])];
61
+ tensor<int32, [4]> var_159_pad_0 = const()[name = string("op_159_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
62
+ tensor<int32, [2]> var_159_dilations_0 = const()[name = string("op_159_dilations_0"), val = tensor<int32, [2]>([1, 1])];
63
+ int32 var_159_groups_0 = const()[name = string("op_159_groups_0"), val = int32(1)];
64
+ tensor<fp16, [16032, 2048, 1, 1]> op_139_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint6, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(124409024))), lut = tensor<fp16, [2004, 1, 1, 1, 64, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(149034240))))[name = string("op_139_promoted_to_fp16_palettized")];
65
+ tensor<fp16, [1, 16032, 1, 1]> var_159_cast_fp16 = conv(dilations = var_159_dilations_0, groups = var_159_groups_0, pad = var_159_pad_0, pad_type = var_159_pad_type_0, strides = var_159_strides_0, weight = op_139_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_159_cast_fp16")];
66
+ tensor<int32, [1]> var_161_axes_0 = const()[name = string("op_161_axes_0"), val = tensor<int32, [1]>([2])];
67
+ tensor<fp16, [1, 16032, 1]> var_161_cast_fp16 = squeeze(axes = var_161_axes_0, x = var_159_cast_fp16)[name = string("op_161_cast_fp16")];
68
+ tensor<int32, [3]> var_164_perm_0 = const()[name = string("op_164_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
69
+ string var_185_pad_type_0 = const()[name = string("op_185_pad_type_0"), val = string("valid")];
70
+ tensor<int32, [2]> var_185_strides_0 = const()[name = string("op_185_strides_0"), val = tensor<int32, [2]>([1, 1])];
71
+ tensor<int32, [4]> var_185_pad_0 = const()[name = string("op_185_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
72
+ tensor<int32, [2]> var_185_dilations_0 = const()[name = string("op_185_dilations_0"), val = tensor<int32, [2]>([1, 1])];
73
+ int32 var_185_groups_0 = const()[name = string("op_185_groups_0"), val = int32(1)];
74
+ tensor<fp16, [16032, 2048, 1, 1]> op_165_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint6, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(149290816))), lut = tensor<fp16, [2004, 1, 1, 1, 64, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(173916032))))[name = string("op_165_promoted_to_fp16_palettized")];
75
+ tensor<fp16, [1, 16032, 1, 1]> var_185_cast_fp16 = conv(dilations = var_185_dilations_0, groups = var_185_groups_0, pad = var_185_pad_0, pad_type = var_185_pad_type_0, strides = var_185_strides_0, weight = op_165_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_185_cast_fp16")];
76
+ tensor<int32, [1]> var_187_axes_0 = const()[name = string("op_187_axes_0"), val = tensor<int32, [1]>([2])];
77
+ tensor<fp16, [1, 16032, 1]> var_187_cast_fp16 = squeeze(axes = var_187_axes_0, x = var_185_cast_fp16)[name = string("op_187_cast_fp16")];
78
+ tensor<int32, [3]> var_190_perm_0 = const()[name = string("op_190_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
79
+ string var_211_pad_type_0 = const()[name = string("op_211_pad_type_0"), val = string("valid")];
80
+ tensor<int32, [2]> var_211_strides_0 = const()[name = string("op_211_strides_0"), val = tensor<int32, [2]>([1, 1])];
81
+ tensor<int32, [4]> var_211_pad_0 = const()[name = string("op_211_pad_0"), val = tensor<int32, [4]>([0, 0, 0, 0])];
82
+ tensor<int32, [2]> var_211_dilations_0 = const()[name = string("op_211_dilations_0"), val = tensor<int32, [2]>([1, 1])];
83
+ int32 var_211_groups_0 = const()[name = string("op_211_groups_0"), val = int32(1)];
84
+ tensor<fp16, [16032, 2048, 1, 1]> op_191_promoted_to_fp16_palettized = constexpr_lut_to_dense(indices = tensor<uint6, [16032, 2048, 1, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(174172608))), lut = tensor<fp16, [2004, 1, 1, 1, 64, 1]>(BLOBFILE(path = string("@model_path/weights/weight.bin"), offset = uint64(198797824))))[name = string("op_191_promoted_to_fp16_palettized")];
85
+ tensor<fp16, [1, 16032, 1, 1]> var_211_cast_fp16 = conv(dilations = var_211_dilations_0, groups = var_211_groups_0, pad = var_211_pad_0, pad_type = var_211_pad_type_0, strides = var_211_strides_0, weight = op_191_promoted_to_fp16_palettized, x = input_cast_fp16)[name = string("op_211_cast_fp16")];
86
+ tensor<int32, [1]> var_213_axes_0 = const()[name = string("op_213_axes_0"), val = tensor<int32, [1]>([2])];
87
+ tensor<fp16, [1, 16032, 1]> var_213_cast_fp16 = squeeze(axes = var_213_axes_0, x = var_211_cast_fp16)[name = string("op_213_cast_fp16")];
88
+ tensor<int32, [3]> var_216_perm_0 = const()[name = string("op_216_perm_0"), val = tensor<int32, [3]>([0, 2, 1])];
89
+ tensor<fp16, [1, 1, 16032]> logits1 = transpose(perm = var_34_perm_0, x = var_31_cast_fp16)[name = string("transpose_0")];
90
+ tensor<fp16, [1, 1, 16032]> logits2 = transpose(perm = var_60_perm_0, x = var_57_cast_fp16)[name = string("transpose_1")];
91
+ tensor<fp16, [1, 1, 16032]> logits3 = transpose(perm = var_86_perm_0, x = var_83_cast_fp16)[name = string("transpose_2")];
92
+ tensor<fp16, [1, 1, 16032]> logits4 = transpose(perm = var_112_perm_0, x = var_109_cast_fp16)[name = string("transpose_3")];
93
+ tensor<fp16, [1, 1, 16032]> logits5 = transpose(perm = var_138_perm_0, x = var_135_cast_fp16)[name = string("transpose_4")];
94
+ tensor<fp16, [1, 1, 16032]> logits6 = transpose(perm = var_164_perm_0, x = var_161_cast_fp16)[name = string("transpose_5")];
95
+ tensor<fp16, [1, 1, 16032]> logits7 = transpose(perm = var_190_perm_0, x = var_187_cast_fp16)[name = string("transpose_6")];
96
+ tensor<fp16, [1, 1, 16032]> logits8 = transpose(perm = var_216_perm_0, x = var_213_cast_fp16)[name = string("transpose_7")];
97
+ } -> (logits1, logits2, logits3, logits4, logits5, logits6, logits7, logits8);
98
+ }
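
Note: the MIL program above returns eight fp16 tensors, logits1 through logits8, each shaped [1, 1, 16032]; together they span the full 128,256-token Llama 3 vocabulary (8 × 16032), which is why meta.yaml records `split_lm_head: 8`. Below is a minimal sketch of how a caller might reassemble them, assuming coremltools ≥ 7 (for `CompiledMLModel`); the input name and shape used here are illustrative assumptions, not the model's declared interface, and chat.py's actual handling may differ.

```python
import numpy as np
import coremltools as ct

# Load the compiled LM head (coremltools >= 7 exposes CompiledMLModel for .mlmodelc).
lm_head = ct.models.CompiledMLModel("llama_lm_head_lut6.mlmodelc")

# One hidden-state vector laid out for the 1x1 conv layers above.
# The shape (hidden size 2048) and the input name are assumptions for illustration.
hidden = np.zeros((1, 2048, 1, 1), dtype=np.float16)
outputs = lm_head.predict({"hidden_states": hidden})

# logits1..logits8 are the output names returned by the MIL program;
# each is [1, 1, 16032], and 8 * 16032 = 128256 (the Llama 3 vocabulary size).
logits = np.concatenate([outputs[f"logits{i}"] for i in range(1, 9)], axis=-1)
next_token = int(np.argmax(logits))
print(next_token)
```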
llama_lm_head_lut6.mlmodelc/weights/weight.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:443f1da2f6ac784d4633ec9629a244b0f1b8a158f9374b7639fa9ed880396e91
3
+ size 199054400
meta.yaml ADDED
@@ -0,0 +1,25 @@
1
+ model_info:
2
+ name: anemll-meta-llama-Llama-3.2-1B-Instruct-ctx1024
3
+ version: 0.3.4
4
+ description: |
5
+ Demonstrates running meta-llama-Llama-3.2-1B-Instruct on Apple Neural Engine
6
+ Context length: 1024
7
+ Batch size: 64
8
+ Chunks: 1
9
+ license: MIT
10
+ author: Anemll
11
+ framework: Core ML
12
+ language: Python
13
+ architecture: llama
14
+ parameters:
15
+ context_length: 1024
16
+ batch_size: 64
17
+ lut_embeddings: none
18
+ lut_ffn: 6
19
+ lut_lmhead: 6
20
+ num_chunks: 1
21
+ model_prefix: llama
22
+ embeddings: llama_embeddings.mlmodelc
23
+ lm_head: llama_lm_head_lut6.mlmodelc
24
+ ffn: llama_FFN_PF_lut6.mlmodelc
25
+ split_lm_head: 8
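
The meta.yaml above ties the three compiled components together (embeddings, FFN, LM head) and records the conversion parameters the inference scripts need: context length 1024, batch size 64, 6-bit LUT quantization for FFN and LM head, and the 8-way LM-head split. A hedged sketch of reading those fields follows, assuming PyYAML and coremltools are installed and the nesting matches the diff above; the real loading logic lives in chat.py and may differ.

```python
import yaml
import coremltools as ct

# Read the component paths and conversion parameters recorded in meta.yaml.
with open("meta.yaml") as f:
    info = yaml.safe_load(f)["model_info"]
params = info["parameters"]  # nesting as it appears in the diff above

# The three compiled Core ML components named above (paths relative to this repo).
embeddings = ct.models.CompiledMLModel(params["embeddings"])  # llama_embeddings.mlmodelc
ffn = ct.models.CompiledMLModel(params["ffn"])                # llama_FFN_PF_lut6.mlmodelc
lm_head = ct.models.CompiledMLModel(params["lm_head"])        # llama_lm_head_lut6.mlmodelc

print(params["context_length"], params["batch_size"], params["split_lm_head"])
```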
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,2062 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "128000": {
4
+ "content": "<|begin_of_text|>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "128001": {
12
+ "content": "<|end_of_text|>",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "128002": {
20
+ "content": "<|reserved_special_token_0|>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "128003": {
28
+ "content": "<|reserved_special_token_1|>",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "128004": {
36
+ "content": "<|finetune_right_pad_id|>",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ },
43
+ "128005": {
44
+ "content": "<|reserved_special_token_2|>",
45
+ "lstrip": false,
46
+ "normalized": false,
47
+ "rstrip": false,
48
+ "single_word": false,
49
+ "special": true
50
+ },
51
+ "128006": {
52
+ "content": "<|start_header_id|>",
53
+ "lstrip": false,
54
+ "normalized": false,
55
+ "rstrip": false,
56
+ "single_word": false,
57
+ "special": true
58
+ },
59
+ "128007": {
60
+ "content": "<|end_header_id|>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false,
65
+ "special": true
66
+ },
67
+ "128008": {
68
+ "content": "<|eom_id|>",
69
+ "lstrip": false,
70
+ "normalized": false,
71
+ "rstrip": false,
72
+ "single_word": false,
73
+ "special": true
74
+ },
75
+ "128009": {
76
+ "content": "<|eot_id|>",
77
+ "lstrip": false,
78
+ "normalized": false,
79
+ "rstrip": false,
80
+ "single_word": false,
81
+ "special": true
82
+ },
83
+ "128010": {
84
+ "content": "<|python_tag|>",
85
+ "lstrip": false,
86
+ "normalized": false,
87
+ "rstrip": false,
88
+ "single_word": false,
89
+ "special": true
90
+ },
91
+ "128011": {
92
+ "content": "<|reserved_special_token_3|>",
93
+ "lstrip": false,
94
+ "normalized": false,
95
+ "rstrip": false,
96
+ "single_word": false,
97
+ "special": true
98
+ },
99
+ "128012": {
100
+ "content": "<|reserved_special_token_4|>",
101
+ "lstrip": false,
102
+ "normalized": false,
103
+ "rstrip": false,
104
+ "single_word": false,
105
+ "special": true
106
+ },
107
+ "128013": {
108
+ "content": "<|reserved_special_token_5|>",
109
+ "lstrip": false,
110
+ "normalized": false,
111
+ "rstrip": false,
112
+ "single_word": false,
113
+ "special": true
114
+ },
115
+ "128014": {
116
+ "content": "<|reserved_special_token_6|>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false,
121
+ "special": true
122
+ },
123
+ "128015": {
124
+ "content": "<|reserved_special_token_7|>",
125
+ "lstrip": false,
126
+ "normalized": false,
127
+ "rstrip": false,
128
+ "single_word": false,
129
+ "special": true
130
+ },
131
+ "128016": {
132
+ "content": "<|reserved_special_token_8|>",
133
+ "lstrip": false,
134
+ "normalized": false,
135
+ "rstrip": false,
136
+ "single_word": false,
137
+ "special": true
138
+ },
139
+ "128017": {
140
+ "content": "<|reserved_special_token_9|>",
141
+ "lstrip": false,
142
+ "normalized": false,
143
+ "rstrip": false,
144
+ "single_word": false,
145
+ "special": true
146
+ },
147
+ "128018": {
148
+ "content": "<|reserved_special_token_10|>",
149
+ "lstrip": false,
150
+ "normalized": false,
151
+ "rstrip": false,
152
+ "single_word": false,
153
+ "special": true
154
+ },
155
+ "128019": {
156
+ "content": "<|reserved_special_token_11|>",
157
+ "lstrip": false,
158
+ "normalized": false,
159
+ "rstrip": false,
160
+ "single_word": false,
161
+ "special": true
162
+ },
163
+ "128020": {
164
+ "content": "<|reserved_special_token_12|>",
165
+ "lstrip": false,
166
+ "normalized": false,
167
+ "rstrip": false,
168
+ "single_word": false,
169
+ "special": true
170
+ },
171
+ "128021": {
172
+ "content": "<|reserved_special_token_13|>",
173
+ "lstrip": false,
174
+ "normalized": false,
175
+ "rstrip": false,
176
+ "single_word": false,
177
+ "special": true
178
+ },
179
+ "128022": {
180
+ "content": "<|reserved_special_token_14|>",
181
+ "lstrip": false,
182
+ "normalized": false,
183
+ "rstrip": false,
184
+ "single_word": false,
185
+ "special": true
186
+ },
187
+ "128023": {
188
+ "content": "<|reserved_special_token_15|>",
189
+ "lstrip": false,
190
+ "normalized": false,
191
+ "rstrip": false,
192
+ "single_word": false,
193
+ "special": true
194
+ },
195
+ "128024": {
196
+ "content": "<|reserved_special_token_16|>",
197
+ "lstrip": false,
198
+ "normalized": false,
199
+ "rstrip": false,
200
+ "single_word": false,
201
+ "special": true
202
+ },
203
+ "128025": {
204
+ "content": "<|reserved_special_token_17|>",
205
+ "lstrip": false,
206
+ "normalized": false,
207
+ "rstrip": false,
208
+ "single_word": false,
209
+ "special": true
210
+ },
211
+ "128026": {
212
+ "content": "<|reserved_special_token_18|>",
213
+ "lstrip": false,
214
+ "normalized": false,
215
+ "rstrip": false,
216
+ "single_word": false,
217
+ "special": true
218
+ },
219
+ "128027": {
220
+ "content": "<|reserved_special_token_19|>",
221
+ "lstrip": false,
222
+ "normalized": false,
223
+ "rstrip": false,
224
+ "single_word": false,
225
+ "special": true
226
+ },
227
+ "128028": {
228
+ "content": "<|reserved_special_token_20|>",
229
+ "lstrip": false,
230
+ "normalized": false,
231
+ "rstrip": false,
232
+ "single_word": false,
233
+ "special": true
234
+ },
235
+ "128029": {
236
+ "content": "<|reserved_special_token_21|>",
237
+ "lstrip": false,
238
+ "normalized": false,
239
+ "rstrip": false,
240
+ "single_word": false,
241
+ "special": true
242
+ },
243
+ "128030": {
244
+ "content": "<|reserved_special_token_22|>",
245
+ "lstrip": false,
246
+ "normalized": false,
247
+ "rstrip": false,
248
+ "single_word": false,
249
+ "special": true
250
+ },
251
+ "128031": {
252
+ "content": "<|reserved_special_token_23|>",
253
+ "lstrip": false,
254
+ "normalized": false,
255
+ "rstrip": false,
256
+ "single_word": false,
257
+ "special": true
258
+ },
259
+ "128032": {
260
+ "content": "<|reserved_special_token_24|>",
261
+ "lstrip": false,
262
+ "normalized": false,
263
+ "rstrip": false,
264
+ "single_word": false,
265
+ "special": true
266
+ },
267
+ "128033": {
268
+ "content": "<|reserved_special_token_25|>",
269
+ "lstrip": false,
270
+ "normalized": false,
271
+ "rstrip": false,
272
+ "single_word": false,
273
+ "special": true
274
+ },
275
+ "128034": {
276
+ "content": "<|reserved_special_token_26|>",
277
+ "lstrip": false,
278
+ "normalized": false,
279
+ "rstrip": false,
280
+ "single_word": false,
281
+ "special": true
282
+ },
283
+ "128035": {
284
+ "content": "<|reserved_special_token_27|>",
285
+ "lstrip": false,
286
+ "normalized": false,
287
+ "rstrip": false,
288
+ "single_word": false,
289
+ "special": true
290
+ },
291
+ "128036": {
292
+ "content": "<|reserved_special_token_28|>",
293
+ "lstrip": false,
294
+ "normalized": false,
295
+ "rstrip": false,
296
+ "single_word": false,
297
+ "special": true
298
+ },
299
+ "128037": {
300
+ "content": "<|reserved_special_token_29|>",
301
+ "lstrip": false,
302
+ "normalized": false,
303
+ "rstrip": false,
304
+ "single_word": false,
305
+ "special": true
306
+ },
307
+ "128038": {
308
+ "content": "<|reserved_special_token_30|>",
309
+ "lstrip": false,
310
+ "normalized": false,
311
+ "rstrip": false,
312
+ "single_word": false,
313
+ "special": true
314
+ },
315
+ "128039": {
316
+ "content": "<|reserved_special_token_31|>",
317
+ "lstrip": false,
318
+ "normalized": false,
319
+ "rstrip": false,
320
+ "single_word": false,
321
+ "special": true
322
+ },
323
+ "128040": {
324
+ "content": "<|reserved_special_token_32|>",
325
+ "lstrip": false,
326
+ "normalized": false,
327
+ "rstrip": false,
328
+ "single_word": false,
329
+ "special": true
330
+ },
331
+ "128041": {
332
+ "content": "<|reserved_special_token_33|>",
333
+ "lstrip": false,
334
+ "normalized": false,
335
+ "rstrip": false,
336
+ "single_word": false,
337
+ "special": true
338
+ },
339
+ "128042": {
340
+ "content": "<|reserved_special_token_34|>",
341
+ "lstrip": false,
342
+ "normalized": false,
343
+ "rstrip": false,
344
+ "single_word": false,
345
+ "special": true
346
+ },
347
+ "128043": {
348
+ "content": "<|reserved_special_token_35|>",
349
+ "lstrip": false,
350
+ "normalized": false,
351
+ "rstrip": false,
352
+ "single_word": false,
353
+ "special": true
354
+ },
355
+ "128044": {
356
+ "content": "<|reserved_special_token_36|>",
357
+ "lstrip": false,
358
+ "normalized": false,
359
+ "rstrip": false,
360
+ "single_word": false,
361
+ "special": true
362
+ },
363
+ "128045": {
364
+ "content": "<|reserved_special_token_37|>",
365
+ "lstrip": false,
366
+ "normalized": false,
367
+ "rstrip": false,
368
+ "single_word": false,
369
+ "special": true
370
+ },
371
+ "128046": {
372
+ "content": "<|reserved_special_token_38|>",
373
+ "lstrip": false,
374
+ "normalized": false,
375
+ "rstrip": false,
376
+ "single_word": false,
377
+ "special": true
378
+ },
379
+ "128047": {
380
+ "content": "<|reserved_special_token_39|>",
381
+ "lstrip": false,
382
+ "normalized": false,
383
+ "rstrip": false,
384
+ "single_word": false,
385
+ "special": true
386
+ },
387
+ "128048": {
388
+ "content": "<|reserved_special_token_40|>",
389
+ "lstrip": false,
390
+ "normalized": false,
391
+ "rstrip": false,
392
+ "single_word": false,
393
+ "special": true
394
+ },
395
+ "128049": {
396
+ "content": "<|reserved_special_token_41|>",
397
+ "lstrip": false,
398
+ "normalized": false,
399
+ "rstrip": false,
400
+ "single_word": false,
401
+ "special": true
402
+ },
403
+ "128050": {
404
+ "content": "<|reserved_special_token_42|>",
405
+ "lstrip": false,
406
+ "normalized": false,
407
+ "rstrip": false,
408
+ "single_word": false,
409
+ "special": true
410
+ },
411
+ "128051": {
412
+ "content": "<|reserved_special_token_43|>",
413
+ "lstrip": false,
414
+ "normalized": false,
415
+ "rstrip": false,
416
+ "single_word": false,
417
+ "special": true
418
+ },
419
+ "128052": {
420
+ "content": "<|reserved_special_token_44|>",
421
+ "lstrip": false,
422
+ "normalized": false,
423
+ "rstrip": false,
424
+ "single_word": false,
425
+ "special": true
426
+ },
427
+ "128053": {
428
+ "content": "<|reserved_special_token_45|>",
429
+ "lstrip": false,
430
+ "normalized": false,
431
+ "rstrip": false,
432
+ "single_word": false,
433
+ "special": true
434
+ },
435
+ "128054": {
436
+ "content": "<|reserved_special_token_46|>",
437
+ "lstrip": false,
438
+ "normalized": false,
439
+ "rstrip": false,
440
+ "single_word": false,
441
+ "special": true
442
+ },
443
+ "128055": {
444
+ "content": "<|reserved_special_token_47|>",
445
+ "lstrip": false,
446
+ "normalized": false,
447
+ "rstrip": false,
448
+ "single_word": false,
449
+ "special": true
450
+ },
451
+ "128056": {
452
+ "content": "<|reserved_special_token_48|>",
453
+ "lstrip": false,
454
+ "normalized": false,
455
+ "rstrip": false,
456
+ "single_word": false,
457
+ "special": true
458
+ },
459
+ "128057": {
460
+ "content": "<|reserved_special_token_49|>",
461
+ "lstrip": false,
462
+ "normalized": false,
463
+ "rstrip": false,
464
+ "single_word": false,
465
+ "special": true
466
+ },
467
+ "128058": {
468
+ "content": "<|reserved_special_token_50|>",
469
+ "lstrip": false,
470
+ "normalized": false,
471
+ "rstrip": false,
472
+ "single_word": false,
473
+ "special": true
474
+ },
475
+ "128059": {
476
+ "content": "<|reserved_special_token_51|>",
477
+ "lstrip": false,
478
+ "normalized": false,
479
+ "rstrip": false,
480
+ "single_word": false,
481
+ "special": true
482
+ },
483
+ "128060": {
484
+ "content": "<|reserved_special_token_52|>",
485
+ "lstrip": false,
486
+ "normalized": false,
487
+ "rstrip": false,
488
+ "single_word": false,
489
+ "special": true
490
+ },
491
+ "128061": {
492
+ "content": "<|reserved_special_token_53|>",
493
+ "lstrip": false,
494
+ "normalized": false,
495
+ "rstrip": false,
496
+ "single_word": false,
497
+ "special": true
498
+ },
499
+ "128062": {
500
+ "content": "<|reserved_special_token_54|>",
501
+ "lstrip": false,
502
+ "normalized": false,
503
+ "rstrip": false,
504
+ "single_word": false,
505
+ "special": true
506
+ },
507
+ "128063": {
508
+ "content": "<|reserved_special_token_55|>",
509
+ "lstrip": false,
510
+ "normalized": false,
511
+ "rstrip": false,
512
+ "single_word": false,
513
+ "special": true
514
+ },
515
+ "128064": {
516
+ "content": "<|reserved_special_token_56|>",
517
+ "lstrip": false,
518
+ "normalized": false,
519
+ "rstrip": false,
520
+ "single_word": false,
521
+ "special": true
522
+ },
523
+ "128065": {
524
+ "content": "<|reserved_special_token_57|>",
525
+ "lstrip": false,
526
+ "normalized": false,
527
+ "rstrip": false,
528
+ "single_word": false,
529
+ "special": true
530
+ },
531
+ "128066": {
532
+ "content": "<|reserved_special_token_58|>",
533
+ "lstrip": false,
534
+ "normalized": false,
535
+ "rstrip": false,
536
+ "single_word": false,
537
+ "special": true
538
+ },
539
+ "128067": {
540
+ "content": "<|reserved_special_token_59|>",
541
+ "lstrip": false,
542
+ "normalized": false,
543
+ "rstrip": false,
544
+ "single_word": false,
545
+ "special": true
546
+ },
547
+ "128068": {
548
+ "content": "<|reserved_special_token_60|>",
549
+ "lstrip": false,
550
+ "normalized": false,
551
+ "rstrip": false,
552
+ "single_word": false,
553
+ "special": true
554
+ },
555
+ "128069": {
556
+ "content": "<|reserved_special_token_61|>",
557
+ "lstrip": false,
558
+ "normalized": false,
559
+ "rstrip": false,
560
+ "single_word": false,
561
+ "special": true
562
+ },
563
+ "128070": {
564
+ "content": "<|reserved_special_token_62|>",
565
+ "lstrip": false,
566
+ "normalized": false,
567
+ "rstrip": false,
568
+ "single_word": false,
569
+ "special": true
570
+ },
571
+ "128071": {
572
+ "content": "<|reserved_special_token_63|>",
573
+ "lstrip": false,
574
+ "normalized": false,
575
+ "rstrip": false,
576
+ "single_word": false,
577
+ "special": true
578
+ },
579
+ "128072": {
580
+ "content": "<|reserved_special_token_64|>",
581
+ "lstrip": false,
582
+ "normalized": false,
583
+ "rstrip": false,
584
+ "single_word": false,
585
+ "special": true
586
+ },
587
+ "128073": {
588
+ "content": "<|reserved_special_token_65|>",
589
+ "lstrip": false,
590
+ "normalized": false,
591
+ "rstrip": false,
592
+ "single_word": false,
593
+ "special": true
594
+ },
595
+ "128074": {
596
+ "content": "<|reserved_special_token_66|>",
597
+ "lstrip": false,
598
+ "normalized": false,
599
+ "rstrip": false,
600
+ "single_word": false,
601
+ "special": true
602
+ },
603
+ "128075": {
604
+ "content": "<|reserved_special_token_67|>",
605
+ "lstrip": false,
606
+ "normalized": false,
607
+ "rstrip": false,
608
+ "single_word": false,
609
+ "special": true
610
+ },
611
+ "128076": {
612
+ "content": "<|reserved_special_token_68|>",
613
+ "lstrip": false,
614
+ "normalized": false,
615
+ "rstrip": false,
616
+ "single_word": false,
617
+ "special": true
618
+ },
619
+ "128077": {
620
+ "content": "<|reserved_special_token_69|>",
621
+ "lstrip": false,
622
+ "normalized": false,
623
+ "rstrip": false,
624
+ "single_word": false,
625
+ "special": true
626
+ },
627
+ "128078": {
628
+ "content": "<|reserved_special_token_70|>",
629
+ "lstrip": false,
630
+ "normalized": false,
631
+ "rstrip": false,
632
+ "single_word": false,
633
+ "special": true
634
+ },
635
+ "128079": {
636
+ "content": "<|reserved_special_token_71|>",
637
+ "lstrip": false,
638
+ "normalized": false,
639
+ "rstrip": false,
640
+ "single_word": false,
641
+ "special": true
642
+ },
643
+ "128080": {
644
+ "content": "<|reserved_special_token_72|>",
645
+ "lstrip": false,
646
+ "normalized": false,
647
+ "rstrip": false,
648
+ "single_word": false,
649
+ "special": true
650
+ },
651
+ "128081": {
652
+ "content": "<|reserved_special_token_73|>",
653
+ "lstrip": false,
654
+ "normalized": false,
655
+ "rstrip": false,
656
+ "single_word": false,
657
+ "special": true
658
+ },
659
+ "128082": {
660
+ "content": "<|reserved_special_token_74|>",
661
+ "lstrip": false,
662
+ "normalized": false,
663
+ "rstrip": false,
664
+ "single_word": false,
665
+ "special": true
666
+ },
667
+ "128083": {
668
+ "content": "<|reserved_special_token_75|>",
669
+ "lstrip": false,
670
+ "normalized": false,
671
+ "rstrip": false,
672
+ "single_word": false,
673
+ "special": true
674
+ },
675
+ "128084": {
676
+ "content": "<|reserved_special_token_76|>",
677
+ "lstrip": false,
678
+ "normalized": false,
679
+ "rstrip": false,
680
+ "single_word": false,
681
+ "special": true
682
+ },
683
+ "128085": {
684
+ "content": "<|reserved_special_token_77|>",
685
+ "lstrip": false,
686
+ "normalized": false,
687
+ "rstrip": false,
688
+ "single_word": false,
689
+ "special": true
690
+ },
691
+ "128086": {
692
+ "content": "<|reserved_special_token_78|>",
693
+ "lstrip": false,
694
+ "normalized": false,
695
+ "rstrip": false,
696
+ "single_word": false,
697
+ "special": true
698
+ },
699
+ "128087": {
700
+ "content": "<|reserved_special_token_79|>",
701
+ "lstrip": false,
702
+ "normalized": false,
703
+ "rstrip": false,
704
+ "single_word": false,
705
+ "special": true
706
+ },
707
+ "128088": {
708
+ "content": "<|reserved_special_token_80|>",
709
+ "lstrip": false,
710
+ "normalized": false,
711
+ "rstrip": false,
712
+ "single_word": false,
713
+ "special": true
714
+ },
715
+ "128089": {
716
+ "content": "<|reserved_special_token_81|>",
717
+ "lstrip": false,
718
+ "normalized": false,
719
+ "rstrip": false,
720
+ "single_word": false,
721
+ "special": true
722
+ },
723
+ "128090": {
724
+ "content": "<|reserved_special_token_82|>",
725
+ "lstrip": false,
726
+ "normalized": false,
727
+ "rstrip": false,
728
+ "single_word": false,
729
+ "special": true
730
+ },
731
+ "128091": {
732
+ "content": "<|reserved_special_token_83|>",
733
+ "lstrip": false,
734
+ "normalized": false,
735
+ "rstrip": false,
736
+ "single_word": false,
737
+ "special": true
738
+ },
739
+ "128092": {
740
+ "content": "<|reserved_special_token_84|>",
741
+ "lstrip": false,
742
+ "normalized": false,
743
+ "rstrip": false,
744
+ "single_word": false,
745
+ "special": true
746
+ },
747
+ "128093": {
748
+ "content": "<|reserved_special_token_85|>",
749
+ "lstrip": false,
750
+ "normalized": false,
751
+ "rstrip": false,
752
+ "single_word": false,
753
+ "special": true
754
+ },
755
+ "128094": {
756
+ "content": "<|reserved_special_token_86|>",
757
+ "lstrip": false,
758
+ "normalized": false,
759
+ "rstrip": false,
760
+ "single_word": false,
761
+ "special": true
762
+ },
763
+ "128095": {
764
+ "content": "<|reserved_special_token_87|>",
765
+ "lstrip": false,
766
+ "normalized": false,
767
+ "rstrip": false,
768
+ "single_word": false,
769
+ "special": true
770
+ },
771
+ "128096": {
772
+ "content": "<|reserved_special_token_88|>",
773
+ "lstrip": false,
774
+ "normalized": false,
775
+ "rstrip": false,
776
+ "single_word": false,
777
+ "special": true
778
+ },
779
+ "128097": {
780
+ "content": "<|reserved_special_token_89|>",
781
+ "lstrip": false,
782
+ "normalized": false,
783
+ "rstrip": false,
784
+ "single_word": false,
785
+ "special": true
786
+ },
787
+ "128098": {
788
+ "content": "<|reserved_special_token_90|>",
789
+ "lstrip": false,
790
+ "normalized": false,
791
+ "rstrip": false,
792
+ "single_word": false,
793
+ "special": true
794
+ },
795
+ "128099": {
796
+ "content": "<|reserved_special_token_91|>",
797
+ "lstrip": false,
798
+ "normalized": false,
799
+ "rstrip": false,
800
+ "single_word": false,
801
+ "special": true
802
+ },
803
+ "128100": {
804
+ "content": "<|reserved_special_token_92|>",
805
+ "lstrip": false,
806
+ "normalized": false,
807
+ "rstrip": false,
808
+ "single_word": false,
809
+ "special": true
810
+ },
811
+ "128101": {
812
+ "content": "<|reserved_special_token_93|>",
813
+ "lstrip": false,
814
+ "normalized": false,
815
+ "rstrip": false,
816
+ "single_word": false,
817
+ "special": true
818
+ },
819
+ "128102": {
820
+ "content": "<|reserved_special_token_94|>",
821
+ "lstrip": false,
822
+ "normalized": false,
823
+ "rstrip": false,
824
+ "single_word": false,
825
+ "special": true
826
+ },
827
+ "128103": {
828
+ "content": "<|reserved_special_token_95|>",
829
+ "lstrip": false,
830
+ "normalized": false,
831
+ "rstrip": false,
832
+ "single_word": false,
833
+ "special": true
834
+ },
835
+ "128104": {
836
+ "content": "<|reserved_special_token_96|>",
837
+ "lstrip": false,
838
+ "normalized": false,
839
+ "rstrip": false,
840
+ "single_word": false,
841
+ "special": true
842
+ },
843
+ "128105": {
844
+ "content": "<|reserved_special_token_97|>",
845
+ "lstrip": false,
846
+ "normalized": false,
847
+ "rstrip": false,
848
+ "single_word": false,
849
+ "special": true
850
+ },
851
+ "128106": {
852
+ "content": "<|reserved_special_token_98|>",
853
+ "lstrip": false,
854
+ "normalized": false,
855
+ "rstrip": false,
856
+ "single_word": false,
857
+ "special": true
858
+ },
859
+ "128107": {
860
+ "content": "<|reserved_special_token_99|>",
861
+ "lstrip": false,
862
+ "normalized": false,
863
+ "rstrip": false,
864
+ "single_word": false,
865
+ "special": true
866
+ },
867
+ "128108": {
868
+ "content": "<|reserved_special_token_100|>",
869
+ "lstrip": false,
870
+ "normalized": false,
871
+ "rstrip": false,
872
+ "single_word": false,
873
+ "special": true
874
+ },
875
+ "128109": {
876
+ "content": "<|reserved_special_token_101|>",
877
+ "lstrip": false,
878
+ "normalized": false,
879
+ "rstrip": false,
880
+ "single_word": false,
881
+ "special": true
882
+ },
883
+ "128110": {
884
+ "content": "<|reserved_special_token_102|>",
885
+ "lstrip": false,
886
+ "normalized": false,
887
+ "rstrip": false,
888
+ "single_word": false,
889
+ "special": true
890
+ },
891
+ "128111": {
892
+ "content": "<|reserved_special_token_103|>",
893
+ "lstrip": false,
894
+ "normalized": false,
895
+ "rstrip": false,
896
+ "single_word": false,
897
+ "special": true
898
+ },
899
+ "128112": {
900
+ "content": "<|reserved_special_token_104|>",
901
+ "lstrip": false,
902
+ "normalized": false,
903
+ "rstrip": false,
904
+ "single_word": false,
905
+ "special": true
906
+ },
907
+ "128113": {
908
+ "content": "<|reserved_special_token_105|>",
909
+ "lstrip": false,
910
+ "normalized": false,
911
+ "rstrip": false,
912
+ "single_word": false,
913
+ "special": true
914
+ },
915
+ "128114": {
916
+ "content": "<|reserved_special_token_106|>",
917
+ "lstrip": false,
918
+ "normalized": false,
919
+ "rstrip": false,
920
+ "single_word": false,
921
+ "special": true
922
+ },
923
+ "128115": {
924
+ "content": "<|reserved_special_token_107|>",
925
+ "lstrip": false,
926
+ "normalized": false,
927
+ "rstrip": false,
928
+ "single_word": false,
929
+ "special": true
930
+ },
931
+ "128116": {
932
+ "content": "<|reserved_special_token_108|>",
933
+ "lstrip": false,
934
+ "normalized": false,
935
+ "rstrip": false,
936
+ "single_word": false,
937
+ "special": true
938
+ },
939
+ "128117": {
940
+ "content": "<|reserved_special_token_109|>",
941
+ "lstrip": false,
942
+ "normalized": false,
943
+ "rstrip": false,
944
+ "single_word": false,
945
+ "special": true
946
+ },
947
+ "128118": {
948
+ "content": "<|reserved_special_token_110|>",
949
+ "lstrip": false,
950
+ "normalized": false,
951
+ "rstrip": false,
952
+ "single_word": false,
953
+ "special": true
954
+ },
955
+ "128119": {
956
+ "content": "<|reserved_special_token_111|>",
957
+ "lstrip": false,
958
+ "normalized": false,
959
+ "rstrip": false,
960
+ "single_word": false,
961
+ "special": true
962
+ },
963
+ "128120": {
964
+ "content": "<|reserved_special_token_112|>",
965
+ "lstrip": false,
966
+ "normalized": false,
967
+ "rstrip": false,
968
+ "single_word": false,
969
+ "special": true
970
+ },
971
+ "128121": {
972
+ "content": "<|reserved_special_token_113|>",
973
+ "lstrip": false,
974
+ "normalized": false,
975
+ "rstrip": false,
976
+ "single_word": false,
977
+ "special": true
978
+ },
979
+ "128122": {
980
+ "content": "<|reserved_special_token_114|>",
981
+ "lstrip": false,
982
+ "normalized": false,
983
+ "rstrip": false,
984
+ "single_word": false,
985
+ "special": true
986
+ },
987
+ "128123": {
988
+ "content": "<|reserved_special_token_115|>",
989
+ "lstrip": false,
990
+ "normalized": false,
991
+ "rstrip": false,
992
+ "single_word": false,
993
+ "special": true
994
+ },
995
+ "128124": {
996
+ "content": "<|reserved_special_token_116|>",
997
+ "lstrip": false,
998
+ "normalized": false,
999
+ "rstrip": false,
1000
+ "single_word": false,
1001
+ "special": true
1002
+ },
1003
+ "128125": {
1004
+ "content": "<|reserved_special_token_117|>",
1005
+ "lstrip": false,
1006
+ "normalized": false,
1007
+ "rstrip": false,
1008
+ "single_word": false,
1009
+ "special": true
1010
+ },
1011
+ "128126": {
1012
+ "content": "<|reserved_special_token_118|>",
1013
+ "lstrip": false,
1014
+ "normalized": false,
1015
+ "rstrip": false,
1016
+ "single_word": false,
1017
+ "special": true
1018
+ },
1019
+ "128127": {
1020
+ "content": "<|reserved_special_token_119|>",
1021
+ "lstrip": false,
1022
+ "normalized": false,
1023
+ "rstrip": false,
1024
+ "single_word": false,
1025
+ "special": true
1026
+ },
1027
+ "128128": {
1028
+ "content": "<|reserved_special_token_120|>",
1029
+ "lstrip": false,
1030
+ "normalized": false,
1031
+ "rstrip": false,
1032
+ "single_word": false,
1033
+ "special": true
1034
+ },
1035
+ "128129": {
1036
+ "content": "<|reserved_special_token_121|>",
1037
+ "lstrip": false,
1038
+ "normalized": false,
1039
+ "rstrip": false,
1040
+ "single_word": false,
1041
+ "special": true
1042
+ },
1043
+ "128130": {
1044
+ "content": "<|reserved_special_token_122|>",
1045
+ "lstrip": false,
1046
+ "normalized": false,
1047
+ "rstrip": false,
1048
+ "single_word": false,
1049
+ "special": true
1050
+ },
1051
+ "128131": {
1052
+ "content": "<|reserved_special_token_123|>",
1053
+ "lstrip": false,
1054
+ "normalized": false,
1055
+ "rstrip": false,
1056
+ "single_word": false,
1057
+ "special": true
1058
+ },
1059
+ "128132": {
1060
+ "content": "<|reserved_special_token_124|>",
1061
+ "lstrip": false,
1062
+ "normalized": false,
1063
+ "rstrip": false,
1064
+ "single_word": false,
1065
+ "special": true
1066
+ },
1067
+ "128133": {
1068
+ "content": "<|reserved_special_token_125|>",
1069
+ "lstrip": false,
1070
+ "normalized": false,
1071
+ "rstrip": false,
1072
+ "single_word": false,
1073
+ "special": true
1074
+ },
1075
+ "128134": {
1076
+ "content": "<|reserved_special_token_126|>",
1077
+ "lstrip": false,
1078
+ "normalized": false,
1079
+ "rstrip": false,
1080
+ "single_word": false,
1081
+ "special": true
1082
+ },
1083
+ "128135": {
1084
+ "content": "<|reserved_special_token_127|>",
1085
+ "lstrip": false,
1086
+ "normalized": false,
1087
+ "rstrip": false,
1088
+ "single_word": false,
1089
+ "special": true
1090
+ },
1091
+ "128136": {
1092
+ "content": "<|reserved_special_token_128|>",
1093
+ "lstrip": false,
1094
+ "normalized": false,
1095
+ "rstrip": false,
1096
+ "single_word": false,
1097
+ "special": true
1098
+ },
1099
+ "128137": {
1100
+ "content": "<|reserved_special_token_129|>",
1101
+ "lstrip": false,
1102
+ "normalized": false,
1103
+ "rstrip": false,
1104
+ "single_word": false,
1105
+ "special": true
1106
+ },
1107
+ "128138": {
1108
+ "content": "<|reserved_special_token_130|>",
1109
+ "lstrip": false,
1110
+ "normalized": false,
1111
+ "rstrip": false,
1112
+ "single_word": false,
1113
+ "special": true
1114
+ },
1115
+ "128139": {
1116
+ "content": "<|reserved_special_token_131|>",
1117
+ "lstrip": false,
1118
+ "normalized": false,
1119
+ "rstrip": false,
1120
+ "single_word": false,
1121
+ "special": true
1122
+ },
1123
+ "128140": {
1124
+ "content": "<|reserved_special_token_132|>",
1125
+ "lstrip": false,
1126
+ "normalized": false,
1127
+ "rstrip": false,
1128
+ "single_word": false,
1129
+ "special": true
1130
+ },
1131
+ "128141": {
1132
+ "content": "<|reserved_special_token_133|>",
1133
+ "lstrip": false,
1134
+ "normalized": false,
1135
+ "rstrip": false,
1136
+ "single_word": false,
1137
+ "special": true
1138
+ },
1139
+ "128142": {
1140
+ "content": "<|reserved_special_token_134|>",
1141
+ "lstrip": false,
1142
+ "normalized": false,
1143
+ "rstrip": false,
1144
+ "single_word": false,
1145
+ "special": true
1146
+ },
1147
+ "128143": {
1148
+ "content": "<|reserved_special_token_135|>",
1149
+ "lstrip": false,
1150
+ "normalized": false,
1151
+ "rstrip": false,
1152
+ "single_word": false,
1153
+ "special": true
1154
+ },
1155
+ "128144": {
1156
+ "content": "<|reserved_special_token_136|>",
1157
+ "lstrip": false,
1158
+ "normalized": false,
1159
+ "rstrip": false,
1160
+ "single_word": false,
1161
+ "special": true
1162
+ },
1163
+ "128145": {
1164
+ "content": "<|reserved_special_token_137|>",
1165
+ "lstrip": false,
1166
+ "normalized": false,
1167
+ "rstrip": false,
1168
+ "single_word": false,
1169
+ "special": true
1170
+ },
1171
+ "128146": {
1172
+ "content": "<|reserved_special_token_138|>",
1173
+ "lstrip": false,
1174
+ "normalized": false,
1175
+ "rstrip": false,
1176
+ "single_word": false,
1177
+ "special": true
1178
+ },
1179
+ "128147": {
1180
+ "content": "<|reserved_special_token_139|>",
1181
+ "lstrip": false,
1182
+ "normalized": false,
1183
+ "rstrip": false,
1184
+ "single_word": false,
1185
+ "special": true
1186
+ },
1187
+ "128148": {
1188
+ "content": "<|reserved_special_token_140|>",
1189
+ "lstrip": false,
1190
+ "normalized": false,
1191
+ "rstrip": false,
1192
+ "single_word": false,
1193
+ "special": true
1194
+ },
1195
+ "128149": {
1196
+ "content": "<|reserved_special_token_141|>",
1197
+ "lstrip": false,
1198
+ "normalized": false,
1199
+ "rstrip": false,
1200
+ "single_word": false,
1201
+ "special": true
1202
+ },
1203
+ "128150": {
1204
+ "content": "<|reserved_special_token_142|>",
1205
+ "lstrip": false,
1206
+ "normalized": false,
1207
+ "rstrip": false,
1208
+ "single_word": false,
1209
+ "special": true
1210
+ },
1211
+ "128151": {
1212
+ "content": "<|reserved_special_token_143|>",
1213
+ "lstrip": false,
1214
+ "normalized": false,
1215
+ "rstrip": false,
1216
+ "single_word": false,
1217
+ "special": true
1218
+ },
1219
+ "128152": {
1220
+ "content": "<|reserved_special_token_144|>",
1221
+ "lstrip": false,
1222
+ "normalized": false,
1223
+ "rstrip": false,
1224
+ "single_word": false,
1225
+ "special": true
1226
+ },
1227
+ "128153": {
1228
+ "content": "<|reserved_special_token_145|>",
1229
+ "lstrip": false,
1230
+ "normalized": false,
1231
+ "rstrip": false,
1232
+ "single_word": false,
1233
+ "special": true
1234
+ },
1235
+ "128154": {
1236
+ "content": "<|reserved_special_token_146|>",
1237
+ "lstrip": false,
1238
+ "normalized": false,
1239
+ "rstrip": false,
1240
+ "single_word": false,
1241
+ "special": true
1242
+ },
1243
+ "128155": {
1244
+ "content": "<|reserved_special_token_147|>",
1245
+ "lstrip": false,
1246
+ "normalized": false,
1247
+ "rstrip": false,
1248
+ "single_word": false,
1249
+ "special": true
1250
+ },
1251
+ "128156": {
1252
+ "content": "<|reserved_special_token_148|>",
1253
+ "lstrip": false,
1254
+ "normalized": false,
1255
+ "rstrip": false,
1256
+ "single_word": false,
1257
+ "special": true
1258
+ },
1259
+ "128157": {
1260
+ "content": "<|reserved_special_token_149|>",
1261
+ "lstrip": false,
1262
+ "normalized": false,
1263
+ "rstrip": false,
1264
+ "single_word": false,
1265
+ "special": true
1266
+ },
1267
+ "128158": {
1268
+ "content": "<|reserved_special_token_150|>",
1269
+ "lstrip": false,
1270
+ "normalized": false,
1271
+ "rstrip": false,
1272
+ "single_word": false,
1273
+ "special": true
1274
+ },
1275
+ "128159": {
1276
+ "content": "<|reserved_special_token_151|>",
1277
+ "lstrip": false,
1278
+ "normalized": false,
1279
+ "rstrip": false,
1280
+ "single_word": false,
1281
+ "special": true
1282
+ },
1283
+ "128160": {
1284
+ "content": "<|reserved_special_token_152|>",
1285
+ "lstrip": false,
1286
+ "normalized": false,
1287
+ "rstrip": false,
1288
+ "single_word": false,
1289
+ "special": true
1290
+ },
1291
+ "128161": {
1292
+ "content": "<|reserved_special_token_153|>",
1293
+ "lstrip": false,
1294
+ "normalized": false,
1295
+ "rstrip": false,
1296
+ "single_word": false,
1297
+ "special": true
1298
+ },
1299
+ "128162": {
1300
+ "content": "<|reserved_special_token_154|>",
1301
+ "lstrip": false,
1302
+ "normalized": false,
1303
+ "rstrip": false,
1304
+ "single_word": false,
1305
+ "special": true
1306
+ },
1307
+ "128163": {
1308
+ "content": "<|reserved_special_token_155|>",
1309
+ "lstrip": false,
1310
+ "normalized": false,
1311
+ "rstrip": false,
1312
+ "single_word": false,
1313
+ "special": true
1314
+ },
1315
+ "128164": {
1316
+ "content": "<|reserved_special_token_156|>",
1317
+ "lstrip": false,
1318
+ "normalized": false,
1319
+ "rstrip": false,
1320
+ "single_word": false,
1321
+ "special": true
1322
+ },
1323
+ "128165": {
1324
+ "content": "<|reserved_special_token_157|>",
1325
+ "lstrip": false,
1326
+ "normalized": false,
1327
+ "rstrip": false,
1328
+ "single_word": false,
1329
+ "special": true
1330
+ },
1331
+ "128166": {
1332
+ "content": "<|reserved_special_token_158|>",
1333
+ "lstrip": false,
1334
+ "normalized": false,
1335
+ "rstrip": false,
1336
+ "single_word": false,
1337
+ "special": true
1338
+ },
1339
+ "128167": {
1340
+ "content": "<|reserved_special_token_159|>",
1341
+ "lstrip": false,
1342
+ "normalized": false,
1343
+ "rstrip": false,
1344
+ "single_word": false,
1345
+ "special": true
1346
+ },
1347
+ "128168": {
1348
+ "content": "<|reserved_special_token_160|>",
1349
+ "lstrip": false,
1350
+ "normalized": false,
1351
+ "rstrip": false,
1352
+ "single_word": false,
1353
+ "special": true
1354
+ },
1355
+ "128169": {
1356
+ "content": "<|reserved_special_token_161|>",
1357
+ "lstrip": false,
1358
+ "normalized": false,
1359
+ "rstrip": false,
1360
+ "single_word": false,
1361
+ "special": true
1362
+ },
1363
+ "128170": {
1364
+ "content": "<|reserved_special_token_162|>",
1365
+ "lstrip": false,
1366
+ "normalized": false,
1367
+ "rstrip": false,
1368
+ "single_word": false,
1369
+ "special": true
1370
+ },
1371
+ "128171": {
1372
+ "content": "<|reserved_special_token_163|>",
1373
+ "lstrip": false,
1374
+ "normalized": false,
1375
+ "rstrip": false,
1376
+ "single_word": false,
1377
+ "special": true
1378
+ },
1379
+ "128172": {
1380
+ "content": "<|reserved_special_token_164|>",
1381
+ "lstrip": false,
1382
+ "normalized": false,
1383
+ "rstrip": false,
1384
+ "single_word": false,
1385
+ "special": true
1386
+ },
1387
+ "128173": {
1388
+ "content": "<|reserved_special_token_165|>",
1389
+ "lstrip": false,
1390
+ "normalized": false,
1391
+ "rstrip": false,
1392
+ "single_word": false,
1393
+ "special": true
1394
+ },
1395
+ "128174": {
1396
+ "content": "<|reserved_special_token_166|>",
1397
+ "lstrip": false,
1398
+ "normalized": false,
1399
+ "rstrip": false,
1400
+ "single_word": false,
1401
+ "special": true
1402
+ },
1403
+ "128175": {
1404
+ "content": "<|reserved_special_token_167|>",
1405
+ "lstrip": false,
1406
+ "normalized": false,
1407
+ "rstrip": false,
1408
+ "single_word": false,
1409
+ "special": true
1410
+ },
1411
+ "128176": {
1412
+ "content": "<|reserved_special_token_168|>",
1413
+ "lstrip": false,
1414
+ "normalized": false,
1415
+ "rstrip": false,
1416
+ "single_word": false,
1417
+ "special": true
1418
+ },
1419
+ "128177": {
1420
+ "content": "<|reserved_special_token_169|>",
1421
+ "lstrip": false,
1422
+ "normalized": false,
1423
+ "rstrip": false,
1424
+ "single_word": false,
1425
+ "special": true
1426
+ },
1427
+ "128178": {
1428
+ "content": "<|reserved_special_token_170|>",
1429
+ "lstrip": false,
1430
+ "normalized": false,
1431
+ "rstrip": false,
1432
+ "single_word": false,
1433
+ "special": true
1434
+ },
1435
+ "128179": {
1436
+ "content": "<|reserved_special_token_171|>",
1437
+ "lstrip": false,
1438
+ "normalized": false,
1439
+ "rstrip": false,
1440
+ "single_word": false,
1441
+ "special": true
1442
+ },
1443
+ "128180": {
1444
+ "content": "<|reserved_special_token_172|>",
1445
+ "lstrip": false,
1446
+ "normalized": false,
1447
+ "rstrip": false,
1448
+ "single_word": false,
1449
+ "special": true
1450
+ },
1451
+ "128181": {
1452
+ "content": "<|reserved_special_token_173|>",
1453
+ "lstrip": false,
1454
+ "normalized": false,
1455
+ "rstrip": false,
1456
+ "single_word": false,
1457
+ "special": true
1458
+ },
1459
+ "128182": {
1460
+ "content": "<|reserved_special_token_174|>",
1461
+ "lstrip": false,
1462
+ "normalized": false,
1463
+ "rstrip": false,
1464
+ "single_word": false,
1465
+ "special": true
1466
+ },
1467
+ "128183": {
1468
+ "content": "<|reserved_special_token_175|>",
1469
+ "lstrip": false,
1470
+ "normalized": false,
1471
+ "rstrip": false,
1472
+ "single_word": false,
1473
+ "special": true
1474
+ },
1475
+ "128184": {
1476
+ "content": "<|reserved_special_token_176|>",
1477
+ "lstrip": false,
1478
+ "normalized": false,
1479
+ "rstrip": false,
1480
+ "single_word": false,
1481
+ "special": true
1482
+ },
1483
+ "128185": {
1484
+ "content": "<|reserved_special_token_177|>",
1485
+ "lstrip": false,
1486
+ "normalized": false,
1487
+ "rstrip": false,
1488
+ "single_word": false,
1489
+ "special": true
1490
+ },
1491
+ "128186": {
1492
+ "content": "<|reserved_special_token_178|>",
1493
+ "lstrip": false,
1494
+ "normalized": false,
1495
+ "rstrip": false,
1496
+ "single_word": false,
1497
+ "special": true
1498
+ },
1499
+ "128187": {
1500
+ "content": "<|reserved_special_token_179|>",
1501
+ "lstrip": false,
1502
+ "normalized": false,
1503
+ "rstrip": false,
1504
+ "single_word": false,
1505
+ "special": true
1506
+ },
1507
+ "128188": {
1508
+ "content": "<|reserved_special_token_180|>",
1509
+ "lstrip": false,
1510
+ "normalized": false,
1511
+ "rstrip": false,
1512
+ "single_word": false,
1513
+ "special": true
1514
+ },
1515
+ "128189": {
1516
+ "content": "<|reserved_special_token_181|>",
1517
+ "lstrip": false,
1518
+ "normalized": false,
1519
+ "rstrip": false,
1520
+ "single_word": false,
1521
+ "special": true
1522
+ },
1523
+ "128190": {
1524
+ "content": "<|reserved_special_token_182|>",
1525
+ "lstrip": false,
1526
+ "normalized": false,
1527
+ "rstrip": false,
1528
+ "single_word": false,
1529
+ "special": true
1530
+ },
1531
+ "128191": {
1532
+ "content": "<|reserved_special_token_183|>",
1533
+ "lstrip": false,
1534
+ "normalized": false,
1535
+ "rstrip": false,
1536
+ "single_word": false,
1537
+ "special": true
1538
+ },
1539
+ "128192": {
1540
+ "content": "<|reserved_special_token_184|>",
1541
+ "lstrip": false,
1542
+ "normalized": false,
1543
+ "rstrip": false,
1544
+ "single_word": false,
1545
+ "special": true
1546
+ },
1547
+ "128193": {
1548
+ "content": "<|reserved_special_token_185|>",
1549
+ "lstrip": false,
1550
+ "normalized": false,
1551
+ "rstrip": false,
1552
+ "single_word": false,
1553
+ "special": true
1554
+ },
1555
+ "128194": {
1556
+ "content": "<|reserved_special_token_186|>",
1557
+ "lstrip": false,
1558
+ "normalized": false,
1559
+ "rstrip": false,
1560
+ "single_word": false,
1561
+ "special": true
1562
+ },
1563
+ "128195": {
1564
+ "content": "<|reserved_special_token_187|>",
1565
+ "lstrip": false,
1566
+ "normalized": false,
1567
+ "rstrip": false,
1568
+ "single_word": false,
1569
+ "special": true
1570
+ },
1571
+ "128196": {
1572
+ "content": "<|reserved_special_token_188|>",
1573
+ "lstrip": false,
1574
+ "normalized": false,
1575
+ "rstrip": false,
1576
+ "single_word": false,
1577
+ "special": true
1578
+ },
1579
+ "128197": {
1580
+ "content": "<|reserved_special_token_189|>",
1581
+ "lstrip": false,
1582
+ "normalized": false,
1583
+ "rstrip": false,
1584
+ "single_word": false,
1585
+ "special": true
1586
+ },
1587
+ "128198": {
1588
+ "content": "<|reserved_special_token_190|>",
1589
+ "lstrip": false,
1590
+ "normalized": false,
1591
+ "rstrip": false,
1592
+ "single_word": false,
1593
+ "special": true
1594
+ },
1595
+ "128199": {
1596
+ "content": "<|reserved_special_token_191|>",
1597
+ "lstrip": false,
1598
+ "normalized": false,
1599
+ "rstrip": false,
1600
+ "single_word": false,
1601
+ "special": true
1602
+ },
1603
+ "128200": {
1604
+ "content": "<|reserved_special_token_192|>",
1605
+ "lstrip": false,
1606
+ "normalized": false,
1607
+ "rstrip": false,
1608
+ "single_word": false,
1609
+ "special": true
1610
+ },
1611
+ "128201": {
1612
+ "content": "<|reserved_special_token_193|>",
1613
+ "lstrip": false,
1614
+ "normalized": false,
1615
+ "rstrip": false,
1616
+ "single_word": false,
1617
+ "special": true
1618
+ },
1619
+ "128202": {
1620
+ "content": "<|reserved_special_token_194|>",
1621
+ "lstrip": false,
1622
+ "normalized": false,
1623
+ "rstrip": false,
1624
+ "single_word": false,
1625
+ "special": true
1626
+ },
1627
+ "128203": {
1628
+ "content": "<|reserved_special_token_195|>",
1629
+ "lstrip": false,
1630
+ "normalized": false,
1631
+ "rstrip": false,
1632
+ "single_word": false,
1633
+ "special": true
1634
+ },
1635
+ "128204": {
1636
+ "content": "<|reserved_special_token_196|>",
1637
+ "lstrip": false,
1638
+ "normalized": false,
1639
+ "rstrip": false,
1640
+ "single_word": false,
1641
+ "special": true
1642
+ },
1643
+ "128205": {
1644
+ "content": "<|reserved_special_token_197|>",
1645
+ "lstrip": false,
1646
+ "normalized": false,
1647
+ "rstrip": false,
1648
+ "single_word": false,
1649
+ "special": true
1650
+ },
1651
+ "128206": {
1652
+ "content": "<|reserved_special_token_198|>",
1653
+ "lstrip": false,
1654
+ "normalized": false,
1655
+ "rstrip": false,
1656
+ "single_word": false,
1657
+ "special": true
1658
+ },
1659
+ "128207": {
1660
+ "content": "<|reserved_special_token_199|>",
1661
+ "lstrip": false,
1662
+ "normalized": false,
1663
+ "rstrip": false,
1664
+ "single_word": false,
1665
+ "special": true
1666
+ },
1667
+ "128208": {
1668
+ "content": "<|reserved_special_token_200|>",
1669
+ "lstrip": false,
1670
+ "normalized": false,
1671
+ "rstrip": false,
1672
+ "single_word": false,
1673
+ "special": true
1674
+ },
1675
+ "128209": {
1676
+ "content": "<|reserved_special_token_201|>",
1677
+ "lstrip": false,
1678
+ "normalized": false,
1679
+ "rstrip": false,
1680
+ "single_word": false,
1681
+ "special": true
1682
+ },
1683
+ "128210": {
1684
+ "content": "<|reserved_special_token_202|>",
1685
+ "lstrip": false,
1686
+ "normalized": false,
1687
+ "rstrip": false,
1688
+ "single_word": false,
1689
+ "special": true
1690
+ },
1691
+ "128211": {
1692
+ "content": "<|reserved_special_token_203|>",
1693
+ "lstrip": false,
1694
+ "normalized": false,
1695
+ "rstrip": false,
1696
+ "single_word": false,
1697
+ "special": true
1698
+ },
1699
+ "128212": {
1700
+ "content": "<|reserved_special_token_204|>",
1701
+ "lstrip": false,
1702
+ "normalized": false,
1703
+ "rstrip": false,
1704
+ "single_word": false,
1705
+ "special": true
1706
+ },
1707
+ "128213": {
1708
+ "content": "<|reserved_special_token_205|>",
1709
+ "lstrip": false,
1710
+ "normalized": false,
1711
+ "rstrip": false,
1712
+ "single_word": false,
1713
+ "special": true
1714
+ },
1715
+ "128214": {
1716
+ "content": "<|reserved_special_token_206|>",
1717
+ "lstrip": false,
1718
+ "normalized": false,
1719
+ "rstrip": false,
1720
+ "single_word": false,
1721
+ "special": true
1722
+ },
1723
+ "128215": {
1724
+ "content": "<|reserved_special_token_207|>",
1725
+ "lstrip": false,
1726
+ "normalized": false,
1727
+ "rstrip": false,
1728
+ "single_word": false,
1729
+ "special": true
1730
+ },
1731
+ "128216": {
1732
+ "content": "<|reserved_special_token_208|>",
1733
+ "lstrip": false,
1734
+ "normalized": false,
1735
+ "rstrip": false,
1736
+ "single_word": false,
1737
+ "special": true
1738
+ },
1739
+ "128217": {
1740
+ "content": "<|reserved_special_token_209|>",
1741
+ "lstrip": false,
1742
+ "normalized": false,
1743
+ "rstrip": false,
1744
+ "single_word": false,
1745
+ "special": true
1746
+ },
1747
+ "128218": {
1748
+ "content": "<|reserved_special_token_210|>",
1749
+ "lstrip": false,
1750
+ "normalized": false,
1751
+ "rstrip": false,
1752
+ "single_word": false,
1753
+ "special": true
1754
+ },
1755
+ "128219": {
1756
+ "content": "<|reserved_special_token_211|>",
1757
+ "lstrip": false,
1758
+ "normalized": false,
1759
+ "rstrip": false,
1760
+ "single_word": false,
1761
+ "special": true
1762
+ },
1763
+ "128220": {
1764
+ "content": "<|reserved_special_token_212|>",
1765
+ "lstrip": false,
1766
+ "normalized": false,
1767
+ "rstrip": false,
1768
+ "single_word": false,
1769
+ "special": true
1770
+ },
1771
+ "128221": {
1772
+ "content": "<|reserved_special_token_213|>",
1773
+ "lstrip": false,
1774
+ "normalized": false,
1775
+ "rstrip": false,
1776
+ "single_word": false,
1777
+ "special": true
1778
+ },
1779
+ "128222": {
1780
+ "content": "<|reserved_special_token_214|>",
1781
+ "lstrip": false,
1782
+ "normalized": false,
1783
+ "rstrip": false,
1784
+ "single_word": false,
1785
+ "special": true
1786
+ },
1787
+ "128223": {
1788
+ "content": "<|reserved_special_token_215|>",
1789
+ "lstrip": false,
1790
+ "normalized": false,
1791
+ "rstrip": false,
1792
+ "single_word": false,
1793
+ "special": true
1794
+ },
1795
+ "128224": {
1796
+ "content": "<|reserved_special_token_216|>",
1797
+ "lstrip": false,
1798
+ "normalized": false,
1799
+ "rstrip": false,
1800
+ "single_word": false,
1801
+ "special": true
1802
+ },
1803
+ "128225": {
1804
+ "content": "<|reserved_special_token_217|>",
1805
+ "lstrip": false,
1806
+ "normalized": false,
1807
+ "rstrip": false,
1808
+ "single_word": false,
1809
+ "special": true
1810
+ },
1811
+ "128226": {
1812
+ "content": "<|reserved_special_token_218|>",
1813
+ "lstrip": false,
1814
+ "normalized": false,
1815
+ "rstrip": false,
1816
+ "single_word": false,
1817
+ "special": true
1818
+ },
1819
+ "128227": {
1820
+ "content": "<|reserved_special_token_219|>",
1821
+ "lstrip": false,
1822
+ "normalized": false,
1823
+ "rstrip": false,
1824
+ "single_word": false,
1825
+ "special": true
1826
+ },
1827
+ "128228": {
1828
+ "content": "<|reserved_special_token_220|>",
1829
+ "lstrip": false,
1830
+ "normalized": false,
1831
+ "rstrip": false,
1832
+ "single_word": false,
1833
+ "special": true
1834
+ },
1835
+ "128229": {
1836
+ "content": "<|reserved_special_token_221|>",
1837
+ "lstrip": false,
1838
+ "normalized": false,
1839
+ "rstrip": false,
1840
+ "single_word": false,
1841
+ "special": true
1842
+ },
1843
+ "128230": {
1844
+ "content": "<|reserved_special_token_222|>",
1845
+ "lstrip": false,
1846
+ "normalized": false,
1847
+ "rstrip": false,
1848
+ "single_word": false,
1849
+ "special": true
1850
+ },
1851
+ "128231": {
1852
+ "content": "<|reserved_special_token_223|>",
1853
+ "lstrip": false,
1854
+ "normalized": false,
1855
+ "rstrip": false,
1856
+ "single_word": false,
1857
+ "special": true
1858
+ },
1859
+ "128232": {
1860
+ "content": "<|reserved_special_token_224|>",
1861
+ "lstrip": false,
1862
+ "normalized": false,
1863
+ "rstrip": false,
1864
+ "single_word": false,
1865
+ "special": true
1866
+ },
1867
+ "128233": {
1868
+ "content": "<|reserved_special_token_225|>",
1869
+ "lstrip": false,
1870
+ "normalized": false,
1871
+ "rstrip": false,
1872
+ "single_word": false,
1873
+ "special": true
1874
+ },
1875
+ "128234": {
1876
+ "content": "<|reserved_special_token_226|>",
1877
+ "lstrip": false,
1878
+ "normalized": false,
1879
+ "rstrip": false,
1880
+ "single_word": false,
1881
+ "special": true
1882
+ },
1883
+ "128235": {
1884
+ "content": "<|reserved_special_token_227|>",
1885
+ "lstrip": false,
1886
+ "normalized": false,
1887
+ "rstrip": false,
1888
+ "single_word": false,
1889
+ "special": true
1890
+ },
1891
+ "128236": {
1892
+ "content": "<|reserved_special_token_228|>",
1893
+ "lstrip": false,
1894
+ "normalized": false,
1895
+ "rstrip": false,
1896
+ "single_word": false,
1897
+ "special": true
1898
+ },
1899
+ "128237": {
1900
+ "content": "<|reserved_special_token_229|>",
1901
+ "lstrip": false,
1902
+ "normalized": false,
1903
+ "rstrip": false,
1904
+ "single_word": false,
1905
+ "special": true
1906
+ },
1907
+ "128238": {
1908
+ "content": "<|reserved_special_token_230|>",
1909
+ "lstrip": false,
1910
+ "normalized": false,
1911
+ "rstrip": false,
1912
+ "single_word": false,
1913
+ "special": true
1914
+ },
1915
+ "128239": {
1916
+ "content": "<|reserved_special_token_231|>",
1917
+ "lstrip": false,
1918
+ "normalized": false,
1919
+ "rstrip": false,
1920
+ "single_word": false,
1921
+ "special": true
1922
+ },
1923
+ "128240": {
1924
+ "content": "<|reserved_special_token_232|>",
1925
+ "lstrip": false,
1926
+ "normalized": false,
1927
+ "rstrip": false,
1928
+ "single_word": false,
1929
+ "special": true
1930
+ },
1931
+ "128241": {
1932
+ "content": "<|reserved_special_token_233|>",
1933
+ "lstrip": false,
1934
+ "normalized": false,
1935
+ "rstrip": false,
1936
+ "single_word": false,
1937
+ "special": true
1938
+ },
1939
+ "128242": {
1940
+ "content": "<|reserved_special_token_234|>",
1941
+ "lstrip": false,
1942
+ "normalized": false,
1943
+ "rstrip": false,
1944
+ "single_word": false,
1945
+ "special": true
1946
+ },
1947
+ "128243": {
1948
+ "content": "<|reserved_special_token_235|>",
1949
+ "lstrip": false,
1950
+ "normalized": false,
1951
+ "rstrip": false,
1952
+ "single_word": false,
1953
+ "special": true
1954
+ },
1955
+ "128244": {
1956
+ "content": "<|reserved_special_token_236|>",
1957
+ "lstrip": false,
1958
+ "normalized": false,
1959
+ "rstrip": false,
1960
+ "single_word": false,
1961
+ "special": true
1962
+ },
1963
+ "128245": {
1964
+ "content": "<|reserved_special_token_237|>",
1965
+ "lstrip": false,
1966
+ "normalized": false,
1967
+ "rstrip": false,
1968
+ "single_word": false,
1969
+ "special": true
1970
+ },
1971
+ "128246": {
1972
+ "content": "<|reserved_special_token_238|>",
1973
+ "lstrip": false,
1974
+ "normalized": false,
1975
+ "rstrip": false,
1976
+ "single_word": false,
1977
+ "special": true
1978
+ },
1979
+ "128247": {
1980
+ "content": "<|reserved_special_token_239|>",
1981
+ "lstrip": false,
1982
+ "normalized": false,
1983
+ "rstrip": false,
1984
+ "single_word": false,
1985
+ "special": true
1986
+ },
1987
+ "128248": {
1988
+ "content": "<|reserved_special_token_240|>",
1989
+ "lstrip": false,
1990
+ "normalized": false,
1991
+ "rstrip": false,
1992
+ "single_word": false,
1993
+ "special": true
1994
+ },
1995
+ "128249": {
1996
+ "content": "<|reserved_special_token_241|>",
1997
+ "lstrip": false,
1998
+ "normalized": false,
1999
+ "rstrip": false,
2000
+ "single_word": false,
2001
+ "special": true
2002
+ },
2003
+ "128250": {
2004
+ "content": "<|reserved_special_token_242|>",
2005
+ "lstrip": false,
2006
+ "normalized": false,
2007
+ "rstrip": false,
2008
+ "single_word": false,
2009
+ "special": true
2010
+ },
2011
+ "128251": {
2012
+ "content": "<|reserved_special_token_243|>",
2013
+ "lstrip": false,
2014
+ "normalized": false,
2015
+ "rstrip": false,
2016
+ "single_word": false,
2017
+ "special": true
2018
+ },
2019
+ "128252": {
2020
+ "content": "<|reserved_special_token_244|>",
2021
+ "lstrip": false,
2022
+ "normalized": false,
2023
+ "rstrip": false,
2024
+ "single_word": false,
2025
+ "special": true
2026
+ },
2027
+ "128253": {
2028
+ "content": "<|reserved_special_token_245|>",
2029
+ "lstrip": false,
2030
+ "normalized": false,
2031
+ "rstrip": false,
2032
+ "single_word": false,
2033
+ "special": true
2034
+ },
2035
+ "128254": {
2036
+ "content": "<|reserved_special_token_246|>",
2037
+ "lstrip": false,
2038
+ "normalized": false,
2039
+ "rstrip": false,
2040
+ "single_word": false,
2041
+ "special": true
2042
+ },
2043
+ "128255": {
2044
+ "content": "<|reserved_special_token_247|>",
2045
+ "lstrip": false,
2046
+ "normalized": false,
2047
+ "rstrip": false,
2048
+ "single_word": false,
2049
+ "special": true
2050
+ }
2051
+ },
2052
+ "bos_token": "<|begin_of_text|>",
2053
+ "chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n {%- if strftime_now is defined %}\n {%- set date_string = strftime_now(\"%d %b %Y\") %}\n {%- else %}\n {%- set date_string = \"26 Jul 2024\" %}\n {%- endif %}\n{%- endif %}\n{%- if not tools is defined %}\n {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content']|trim %}\n {%- set messages = messages[1:] %}\n{%- else %}\n {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- System message #}\n{{- \"<|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n{%- if tools is not none %}\n {{- \"Environment: ipython\\n\" }}\n{%- endif %}\n{{- \"Cutting Knowledge Date: December 2023\\n\" }}\n{{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n{%- if tools is not none and not tools_in_user_message %}\n {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n {{- \"Do not use variables.\\n\\n\" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- \"\\n\\n\" }}\n {%- endfor %}\n{%- endif %}\n{{- system_message }}\n{{- \"<|eot_id|>\" }}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n {#- Extract the first user message so we can plug it in here #}\n {%- if messages | length != 0 %}\n {%- set first_user_message = messages[0]['content']|trim %}\n {%- set messages = messages[1:] %}\n {%- else %}\n {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n {{- '<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' 
}}\n {{- \"Do not use variables.\\n\\n\" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- \"\\n\\n\" }}\n {%- endfor %}\n {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>' }}\n {%- elif 'tool_calls' in message %}\n {%- if not message.tool_calls|length == 1 %}\n {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n {%- endif %}\n {%- set tool_call = message.tool_calls[0].function %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n {{- '\"parameters\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- \"}\" }}\n {{- \"<|eot_id|>\" }}\n {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n {%- if message.content is mapping or message.content is iterable %}\n {{- message.content | tojson }}\n {%- else %}\n {{- message.content }}\n {%- endif %}\n {{- \"<|eot_id|>\" }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n",
2054
+ "clean_up_tokenization_spaces": true,
2055
+ "eos_token": "<|eot_id|>",
2056
+ "model_input_names": [
2057
+ "input_ids",
2058
+ "attention_mask"
2059
+ ],
2060
+ "model_max_length": 131072,
2061
+ "tokenizer_class": "PreTrainedTokenizerFast"
2062
+ }
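
The tokenizer configuration added above (bos `<|begin_of_text|>`, eos `<|eot_id|>`, the Llama-style Jinja `chat_template`, and `PreTrainedTokenizerFast`) is what Hugging Face `transformers` picks up when the model directory is loaded. Below is a minimal sketch, assuming a local clone of the repository at a placeholder path and the standard `AutoTokenizer` / `apply_chat_template` API, of how this configuration is typically consumed; it is illustrative, not part of the uploaded files.

```python
from transformers import AutoTokenizer

# Placeholder path: point this at a local clone of the model repository
# that contains the tokenizer_config.json shown above.
MODEL_DIR = "./path/to/model"

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# apply_chat_template renders the Jinja chat_template above, wrapping each
# turn in <|start_header_id|>...<|end_header_id|> markers and closing it
# with the configured eos token <|eot_id|>.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)

# The same call without tokenize=False returns token IDs, e.g. to feed an
# inference runtime directly.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
)
```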