Upload folder using huggingface_hub
Browse filesThis view is limited to 50 files because it contains too many changes.
See raw diff
- .gitattributes +2 -34
- README.md +175 -0
- _model_model_Constant_2_attr__value +1 -0
- _model_model_Constant_attr__value +0 -0
- added_tokens.json +13 -0
- config.json +141 -0
- model.model.embed_tokens.weight +3 -0
- model.model.layers.0.input_layernorm.weight +0 -0
- model.model.layers.0.post_attention_layernorm.weight +0 -0
- model.model.layers.1.input_layernorm.weight +0 -0
- model.model.layers.1.post_attention_layernorm.weight +0 -0
- model.model.layers.10.input_layernorm.weight +0 -0
- model.model.layers.10.post_attention_layernorm.weight +0 -0
- model.model.layers.11.input_layernorm.weight +0 -0
- model.model.layers.11.post_attention_layernorm.weight +0 -0
- model.model.layers.12.input_layernorm.weight +0 -0
- model.model.layers.12.post_attention_layernorm.weight +0 -0
- model.model.layers.13.input_layernorm.weight +0 -0
- model.model.layers.13.post_attention_layernorm.weight +0 -0
- model.model.layers.14.input_layernorm.weight +0 -0
- model.model.layers.14.post_attention_layernorm.weight +0 -0
- model.model.layers.15.input_layernorm.weight +0 -0
- model.model.layers.15.post_attention_layernorm.weight +0 -0
- model.model.layers.16.input_layernorm.weight +0 -0
- model.model.layers.16.post_attention_layernorm.weight +0 -0
- model.model.layers.17.input_layernorm.weight +0 -0
- model.model.layers.17.post_attention_layernorm.weight +0 -0
- model.model.layers.18.input_layernorm.weight +0 -0
- model.model.layers.18.post_attention_layernorm.weight +0 -0
- model.model.layers.19.input_layernorm.weight +0 -0
- model.model.layers.19.post_attention_layernorm.weight +0 -0
- model.model.layers.2.input_layernorm.weight +0 -0
- model.model.layers.2.post_attention_layernorm.weight +0 -0
- model.model.layers.20.input_layernorm.weight +0 -0
- model.model.layers.20.post_attention_layernorm.weight +0 -0
- model.model.layers.21.input_layernorm.weight +0 -0
- model.model.layers.21.post_attention_layernorm.weight +0 -0
- model.model.layers.22.input_layernorm.weight +0 -0
- model.model.layers.22.post_attention_layernorm.weight +0 -0
- model.model.layers.23.input_layernorm.weight +0 -0
- model.model.layers.23.post_attention_layernorm.weight +0 -0
- model.model.layers.24.input_layernorm.weight +0 -0
- model.model.layers.24.post_attention_layernorm.weight +0 -0
- model.model.layers.25.input_layernorm.weight +0 -0
- model.model.layers.25.post_attention_layernorm.weight +0 -0
- model.model.layers.26.input_layernorm.weight +0 -0
- model.model.layers.26.post_attention_layernorm.weight +0 -0
- model.model.layers.27.input_layernorm.weight +0 -0
- model.model.layers.27.post_attention_layernorm.weight +0 -0
- model.model.layers.28.input_layernorm.weight +0 -0
.gitattributes
CHANGED
@@ -1,35 +1,3 @@
|
|
1 |
-
*.7z filter=lfs diff=lfs merge=lfs -text
|
2 |
-
*.arrow filter=lfs diff=lfs merge=lfs -text
|
3 |
-
*.bin filter=lfs diff=lfs merge=lfs -text
|
4 |
-
*.bz2 filter=lfs diff=lfs merge=lfs -text
|
5 |
-
*.ckpt filter=lfs diff=lfs merge=lfs -text
|
6 |
-
*.ftz filter=lfs diff=lfs merge=lfs -text
|
7 |
-
*.gz filter=lfs diff=lfs merge=lfs -text
|
8 |
-
*.h5 filter=lfs diff=lfs merge=lfs -text
|
9 |
-
*.joblib filter=lfs diff=lfs merge=lfs -text
|
10 |
-
*.lfs.* filter=lfs diff=lfs merge=lfs -text
|
11 |
-
*.mlmodel filter=lfs diff=lfs merge=lfs -text
|
12 |
-
*.model filter=lfs diff=lfs merge=lfs -text
|
13 |
-
*.msgpack filter=lfs diff=lfs merge=lfs -text
|
14 |
-
*.npy filter=lfs diff=lfs merge=lfs -text
|
15 |
-
*.npz filter=lfs diff=lfs merge=lfs -text
|
16 |
*.onnx filter=lfs diff=lfs merge=lfs -text
|
17 |
-
|
18 |
-
*.
|
19 |
-
*.pb filter=lfs diff=lfs merge=lfs -text
|
20 |
-
*.pickle filter=lfs diff=lfs merge=lfs -text
|
21 |
-
*.pkl filter=lfs diff=lfs merge=lfs -text
|
22 |
-
*.pt filter=lfs diff=lfs merge=lfs -text
|
23 |
-
*.pth filter=lfs diff=lfs merge=lfs -text
|
24 |
-
*.rar filter=lfs diff=lfs merge=lfs -text
|
25 |
-
*.safetensors filter=lfs diff=lfs merge=lfs -text
|
26 |
-
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
|
27 |
-
*.tar.* filter=lfs diff=lfs merge=lfs -text
|
28 |
-
*.tar filter=lfs diff=lfs merge=lfs -text
|
29 |
-
*.tflite filter=lfs diff=lfs merge=lfs -text
|
30 |
-
*.tgz filter=lfs diff=lfs merge=lfs -text
|
31 |
-
*.wasm filter=lfs diff=lfs merge=lfs -text
|
32 |
-
*.xz filter=lfs diff=lfs merge=lfs -text
|
33 |
-
*.zip filter=lfs diff=lfs merge=lfs -text
|
34 |
-
*.zst filter=lfs diff=lfs merge=lfs -text
|
35 |
-
*tfevents* filter=lfs diff=lfs merge=lfs -text
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
*.onnx filter=lfs diff=lfs merge=lfs -text
|
2 |
+
onnx__MatMul_* filter=lfs diff=lfs merge=lfs -text
|
3 |
+
*.weight filter=lfs diff=lfs merge=lfs -text
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
README.md
ADDED
@@ -0,0 +1,175 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
# Phi-3.5 Mini Instruct - Quantized ONNX Model (Consolidated)
|
2 |
+
|
3 |
+
## 🚀 Model Overview
|
4 |
+
This is Microsoft's Phi-3.5-mini-instruct model, quantized to INT8 and optimized for Qualcomm Snapdragon NPU deployment. This version consolidates all files into a single directory for easier deployment.
|
5 |
+
|
6 |
+
## 📊 Model Specifications
|
7 |
+
- **Base Model**: microsoft/Phi-3.5-mini-instruct
|
8 |
+
- **Size**: 7292.4 MB (quantized from 7.3GB original)
|
9 |
+
- **Compression**: 50% size reduction
|
10 |
+
- **Format**: ONNX INT8 quantized with external data
|
11 |
+
- **Files**: 203 files total
|
12 |
+
- **Target**: Qualcomm Snapdragon NPUs
|
13 |
+
|
14 |
+
## 🔧 Quick Start
|
15 |
+
|
16 |
+
### Installation
|
17 |
+
```bash
|
18 |
+
pip install onnxruntime transformers numpy
|
19 |
+
```
|
20 |
+
|
21 |
+
### Basic Usage
|
22 |
+
```python
|
23 |
+
import onnxruntime as ort
|
24 |
+
from transformers import AutoTokenizer
|
25 |
+
import numpy as np
|
26 |
+
|
27 |
+
# Load tokenizer
|
28 |
+
tokenizer = AutoTokenizer.from_pretrained(".", trust_remote_code=True)
|
29 |
+
|
30 |
+
# Load ONNX model
|
31 |
+
session = ort.InferenceSession("model.onnx")
|
32 |
+
|
33 |
+
# Prepare input
|
34 |
+
text = "Hello, what is artificial intelligence?"
|
35 |
+
inputs = tokenizer(text, return_tensors="np", max_length=64, truncation=True, padding="max_length")
|
36 |
+
|
37 |
+
# Run inference
|
38 |
+
outputs = session.run(None, {"input_ids": inputs["input_ids"]})
|
39 |
+
logits = outputs[0]
|
40 |
+
|
41 |
+
print(f"Input: {text}")
|
42 |
+
print(f"Output shape: {logits.shape}")
|
43 |
+
```
|
44 |
+
|
45 |
+
### Text Generation Example
|
46 |
+
```python
|
47 |
+
def generate_response(prompt, max_new_tokens=50):
|
48 |
+
# Tokenize
|
49 |
+
inputs = tokenizer(prompt, return_tensors="np", max_length=64, truncation=True)
|
50 |
+
input_ids = inputs["input_ids"]
|
51 |
+
|
52 |
+
generated_tokens = []
|
53 |
+
|
54 |
+
for _ in range(max_new_tokens):
|
55 |
+
# Get model prediction
|
56 |
+
outputs = session.run(None, {"input_ids": input_ids})
|
57 |
+
logits = outputs[0]
|
58 |
+
|
59 |
+
# Get next token (greedy)
|
60 |
+
next_token_id = np.argmax(logits[0, -1, :])
|
61 |
+
generated_tokens.append(next_token_id)
|
62 |
+
|
63 |
+
# Stop on EOS
|
64 |
+
if next_token_id == tokenizer.eos_token_id:
|
65 |
+
break
|
66 |
+
|
67 |
+
# Add to input for next iteration
|
68 |
+
input_ids = np.concatenate([input_ids, [[next_token_id]]], axis=1)
|
69 |
+
|
70 |
+
# Decode response
|
71 |
+
response = tokenizer.decode(generated_tokens, skip_special_tokens=True)
|
72 |
+
return response
|
73 |
+
|
74 |
+
# Example
|
75 |
+
response = generate_response("What is machine learning?")
|
76 |
+
print(f"Response: {response}")
|
77 |
+
```
|
78 |
+
|
79 |
+
## 🧪 Testing Script
|
80 |
+
```python
|
81 |
+
#!/usr/bin/env python3
|
82 |
+
import onnxruntime as ort
|
83 |
+
from transformers import AutoTokenizer
|
84 |
+
import numpy as np
|
85 |
+
|
86 |
+
def test_model():
|
87 |
+
print("🔄 Loading model...")
|
88 |
+
tokenizer = AutoTokenizer.from_pretrained(".", trust_remote_code=True)
|
89 |
+
session = ort.InferenceSession("model.onnx")
|
90 |
+
|
91 |
+
test_cases = [
|
92 |
+
"Hello, how are you?",
|
93 |
+
"What is the capital of France?",
|
94 |
+
"Explain artificial intelligence in simple terms."
|
95 |
+
]
|
96 |
+
|
97 |
+
for i, text in enumerate(test_cases, 1):
|
98 |
+
print(f"\n{i}. Input: {text}")
|
99 |
+
|
100 |
+
inputs = tokenizer(text, return_tensors="np", max_length=64,
|
101 |
+
truncation=True, padding="max_length")
|
102 |
+
outputs = session.run(None, {"input_ids": inputs["input_ids"]})
|
103 |
+
|
104 |
+
print(f" ✅ Output shape: {outputs[0].shape}")
|
105 |
+
|
106 |
+
print("\n🎉 All tests passed!")
|
107 |
+
|
108 |
+
if __name__ == "__main__":
|
109 |
+
test_model()
|
110 |
+
```
|
111 |
+
|
112 |
+
## ⚡ Performance Expectations
|
113 |
+
- **Inference Speed**: 2-3x faster than CPU on Snapdragon NPUs
|
114 |
+
- **Memory Usage**: ~4GB RAM required
|
115 |
+
- **Tokens/Second**: 8-15 on Snapdragon 8cx Gen 2
|
116 |
+
- **Latency**: <100ms for short sequences
|
117 |
+
|
118 |
+
## 📁 File Structure
|
119 |
+
```
|
120 |
+
model.onnx # Main ONNX model file
|
121 |
+
tokenizer.json # Tokenizer vocabulary
|
122 |
+
tokenizer_config.json # Tokenizer configuration
|
123 |
+
config.json # Model configuration
|
124 |
+
onnx__MatMul_* # External weight data files (129 files)
|
125 |
+
*.weight # Additional model weights
|
126 |
+
```
|
127 |
+
|
128 |
+
## ⚠️ Important Notes
|
129 |
+
|
130 |
+
1. **All Files Required**: Keep all files in the same directory. The model.onnx file references external data files.
|
131 |
+
|
132 |
+
2. **Memory Requirements**: Ensure you have at least 4GB of available RAM.
|
133 |
+
|
134 |
+
3. **Qualcomm NPU Setup**: For optimal performance on Qualcomm hardware:
|
135 |
+
```python
|
136 |
+
# Use QNN execution provider (when available)
|
137 |
+
providers = ['QNNExecutionProvider', 'CPUExecutionProvider']
|
138 |
+
session = ort.InferenceSession("model.onnx", providers=providers)
|
139 |
+
```
|
140 |
+
|
141 |
+
## 🚀 Deployment on Qualcomm Devices
|
142 |
+
|
143 |
+
### Windows on ARM
|
144 |
+
1. Copy all files to your device
|
145 |
+
2. Install ONNX Runtime: `pip install onnxruntime`
|
146 |
+
3. Run the test script to verify
|
147 |
+
|
148 |
+
### Android (with QNN SDK)
|
149 |
+
1. Use ONNX Runtime Mobile with QNN support
|
150 |
+
2. Package all files in your app bundle
|
151 |
+
3. Initialize with QNN execution provider
|
152 |
+
|
153 |
+
## 🐛 Troubleshooting
|
154 |
+
|
155 |
+
**Model fails to load:**
|
156 |
+
- Ensure all files are in the same directory
|
157 |
+
- Check that you have sufficient RAM (4GB+)
|
158 |
+
|
159 |
+
**Slow inference:**
|
160 |
+
- Try enabling graph optimizations:
|
161 |
+
```python
|
162 |
+
sess_options = ort.SessionOptions()
|
163 |
+
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
|
164 |
+
session = ort.InferenceSession("model.onnx", sess_options)
|
165 |
+
```
|
166 |
+
|
167 |
+
**Out of memory:**
|
168 |
+
- Reduce sequence length: `max_length=32`
|
169 |
+
- Process smaller batches
|
170 |
+
|
171 |
+
## 📄 License
|
172 |
+
This model inherits the license from microsoft/Phi-3.5-mini-instruct.
|
173 |
+
|
174 |
+
---
|
175 |
+
*Quantized and optimized for Qualcomm Snapdragon NPU deployment*
|
_model_model_Constant_2_attr__value
ADDED
@@ -0,0 +1 @@
|
|
|
|
|
1 |
+
|
_model_model_Constant_attr__value
ADDED
Binary file (8.19 kB). View file
|
|
added_tokens.json
ADDED
@@ -0,0 +1,13 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
{
|
2 |
+
"<|assistant|>": 32001,
|
3 |
+
"<|endoftext|>": 32000,
|
4 |
+
"<|end|>": 32007,
|
5 |
+
"<|placeholder1|>": 32002,
|
6 |
+
"<|placeholder2|>": 32003,
|
7 |
+
"<|placeholder3|>": 32004,
|
8 |
+
"<|placeholder4|>": 32005,
|
9 |
+
"<|placeholder5|>": 32008,
|
10 |
+
"<|placeholder6|>": 32009,
|
11 |
+
"<|system|>": 32006,
|
12 |
+
"<|user|>": 32010
|
13 |
+
}
|
config.json
ADDED
@@ -0,0 +1,141 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
{
|
2 |
+
"architectures": [
|
3 |
+
"Phi3ForCausalLM"
|
4 |
+
],
|
5 |
+
"attention_bias": false,
|
6 |
+
"attention_dropout": 0.0,
|
7 |
+
"auto_map": {
|
8 |
+
"AutoConfig": "configuration_phi3.Phi3Config",
|
9 |
+
"AutoModelForCausalLM": "modeling_phi3.Phi3ForCausalLM"
|
10 |
+
},
|
11 |
+
"bos_token_id": 1,
|
12 |
+
"dtype": "bfloat16",
|
13 |
+
"embd_pdrop": 0.0,
|
14 |
+
"eos_token_id": 32000,
|
15 |
+
"hidden_act": "silu",
|
16 |
+
"hidden_size": 3072,
|
17 |
+
"initializer_range": 0.02,
|
18 |
+
"intermediate_size": 8192,
|
19 |
+
"max_position_embeddings": 131072,
|
20 |
+
"model_type": "phi3",
|
21 |
+
"num_attention_heads": 32,
|
22 |
+
"num_hidden_layers": 32,
|
23 |
+
"num_key_value_heads": 32,
|
24 |
+
"original_max_position_embeddings": 4096,
|
25 |
+
"pad_token_id": 32000,
|
26 |
+
"quantization_config": {
|
27 |
+
"bits": 8,
|
28 |
+
"quant_method": "onnx_dynamic_int8"
|
29 |
+
},
|
30 |
+
"resid_pdrop": 0.0,
|
31 |
+
"rms_norm_eps": 1e-05,
|
32 |
+
"rope_scaling": {
|
33 |
+
"long_factor": [
|
34 |
+
1.0800000429153442,
|
35 |
+
1.1100000143051147,
|
36 |
+
1.1399999856948853,
|
37 |
+
1.340000033378601,
|
38 |
+
1.5899999141693115,
|
39 |
+
1.600000023841858,
|
40 |
+
1.6200000047683716,
|
41 |
+
2.620000123977661,
|
42 |
+
3.2300000190734863,
|
43 |
+
3.2300000190734863,
|
44 |
+
4.789999961853027,
|
45 |
+
7.400000095367432,
|
46 |
+
7.700000286102295,
|
47 |
+
9.09000015258789,
|
48 |
+
12.199999809265137,
|
49 |
+
17.670000076293945,
|
50 |
+
24.46000099182129,
|
51 |
+
28.57000160217285,
|
52 |
+
30.420001983642578,
|
53 |
+
30.840002059936523,
|
54 |
+
32.590003967285156,
|
55 |
+
32.93000411987305,
|
56 |
+
42.320003509521484,
|
57 |
+
44.96000289916992,
|
58 |
+
50.340003967285156,
|
59 |
+
50.45000457763672,
|
60 |
+
57.55000305175781,
|
61 |
+
57.93000411987305,
|
62 |
+
58.21000289916992,
|
63 |
+
60.1400032043457,
|
64 |
+
62.61000442504883,
|
65 |
+
62.62000274658203,
|
66 |
+
62.71000289916992,
|
67 |
+
63.1400032043457,
|
68 |
+
63.1400032043457,
|
69 |
+
63.77000427246094,
|
70 |
+
63.93000411987305,
|
71 |
+
63.96000289916992,
|
72 |
+
63.970001220703125,
|
73 |
+
64.02999877929688,
|
74 |
+
64.06999969482422,
|
75 |
+
64.08000183105469,
|
76 |
+
64.12000274658203,
|
77 |
+
64.41000366210938,
|
78 |
+
64.4800033569336,
|
79 |
+
64.51000213623047,
|
80 |
+
64.52999877929688,
|
81 |
+
64.83999633789062
|
82 |
+
],
|
83 |
+
"short_factor": [
|
84 |
+
1.0,
|
85 |
+
1.0199999809265137,
|
86 |
+
1.0299999713897705,
|
87 |
+
1.0299999713897705,
|
88 |
+
1.0499999523162842,
|
89 |
+
1.0499999523162842,
|
90 |
+
1.0499999523162842,
|
91 |
+
1.0499999523162842,
|
92 |
+
1.0499999523162842,
|
93 |
+
1.0699999332427979,
|
94 |
+
1.0999999046325684,
|
95 |
+
1.1099998950958252,
|
96 |
+
1.1599998474121094,
|
97 |
+
1.1599998474121094,
|
98 |
+
1.1699998378753662,
|
99 |
+
1.2899998426437378,
|
100 |
+
1.339999794960022,
|
101 |
+
1.679999828338623,
|
102 |
+
1.7899998426437378,
|
103 |
+
1.8199998140335083,
|
104 |
+
1.8499997854232788,
|
105 |
+
1.8799997568130493,
|
106 |
+
1.9099997282028198,
|
107 |
+
1.9399996995925903,
|
108 |
+
1.9899996519088745,
|
109 |
+
2.0199997425079346,
|
110 |
+
2.0199997425079346,
|
111 |
+
2.0199997425079346,
|
112 |
+
2.0199997425079346,
|
113 |
+
2.0199997425079346,
|
114 |
+
2.0199997425079346,
|
115 |
+
2.0299997329711914,
|
116 |
+
2.0299997329711914,
|
117 |
+
2.0299997329711914,
|
118 |
+
2.0299997329711914,
|
119 |
+
2.0299997329711914,
|
120 |
+
2.0299997329711914,
|
121 |
+
2.0299997329711914,
|
122 |
+
2.0299997329711914,
|
123 |
+
2.0299997329711914,
|
124 |
+
2.0799996852874756,
|
125 |
+
2.0899996757507324,
|
126 |
+
2.189999580383301,
|
127 |
+
2.2199995517730713,
|
128 |
+
2.5899994373321533,
|
129 |
+
2.729999542236328,
|
130 |
+
2.749999523162842,
|
131 |
+
2.8399994373321533
|
132 |
+
],
|
133 |
+
"type": "longrope"
|
134 |
+
},
|
135 |
+
"rope_theta": 10000.0,
|
136 |
+
"sliding_window": 262144,
|
137 |
+
"tie_word_embeddings": false,
|
138 |
+
"transformers_version": "4.56.0",
|
139 |
+
"use_cache": true,
|
140 |
+
"vocab_size": 32064
|
141 |
+
}
|
model.model.embed_tokens.weight
ADDED
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
1 |
+
version https://git-lfs.github.com/spec/v1
|
2 |
+
oid sha256:174450eec07a09e0e7e13016ad8361016ba737d1b85ec80dbf9342faee3ef23d
|
3 |
+
size 197001216
|
model.model.layers.0.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.0.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.1.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.1.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.10.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.10.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.11.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.11.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.12.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.12.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.13.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.13.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.14.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.14.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.15.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.15.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.16.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.16.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.17.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.17.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.18.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.18.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.19.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.19.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.2.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.2.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.20.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.20.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.21.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.21.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.22.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.22.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.23.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.23.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.24.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.24.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.25.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.25.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.26.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.26.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.27.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.27.post_attention_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|
model.model.layers.28.input_layernorm.weight
ADDED
Binary file (6.14 kB). View file
|
|