Upload README.md with huggingface_hub
README.md
CHANGED
@@ -1,10 +1,9 @@
 ---
 library_name: pytorch
-license:
+license: other
 tags:
 - llm
 - generative_ai
-- quantized
 - android
 pipeline_tag: text-generation
 
@@ -25,7 +24,7 @@ This model is an implementation of Phi-3.5-mini-instruct found [here](https://hu
 
 ### Model Details
 
-- **Model Type:**
+- **Model Type:** Model_use_case.text_generation
 - **Model Stats:**
   - Input sequence length for Prompt Processor: 128
   - Context length: 4096
@@ -41,11 +40,11 @@ This model is an implementation of Phi-3.5-mini-instruct found [here](https://hu
 - TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
 - Response Rate: Rate of response generation after the first response token.
 
-| Model | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) |
-|---|---|---|---|---|---|
-| Phi-3.5-
-| Phi-3.5-
-| Phi-3.5-
+| Model | Precision | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) |
+|---|---|---|---|---|---|---|
+| Phi-3.5-Mini-Instruct | w4a16 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN | 13.01 | 0.1469056 - 4.7009792 | -- | Use Export Script |
+| Phi-3.5-Mini-Instruct | w4a16 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 6.2 | 0.185833 - 5.946656 | -- | Use Export Script |
+| Phi-3.5-Mini-Instruct | w4a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 14.73 | 0.1195948 - 3.8270336 | -- | Use Export Script |
 
 ## Deploying Phi-3.5-mini-instruct on-device
 
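A note on reading the TTFT column in the new table: in every row the upper bound is exactly 32× the lower bound, which matches the 4096 / 128 = 32 prompt-processor iterations needed to fill the full context, so TTFT grows roughly linearly with the number of 128-token prompt chunks. Below is a minimal sketch of that interpolation; the linear-scaling assumption and the `estimate_ttft` helper are illustrative, not part of the model card.

```python
import math

SEQ_LEN = 128       # prompt-processor input sequence length (from Model Stats)
CONTEXT_LEN = 4096  # maximum context length (from Model Stats)

def estimate_ttft(prompt_tokens: int, per_iteration_s: float) -> float:
    """Estimate TTFT as prompt-processor iterations times per-iteration time.

    per_iteration_s is the table's TTFT lower bound (one 128-token
    iteration); the upper bound corresponds to 32 iterations (full context).
    """
    if not 0 < prompt_tokens <= CONTEXT_LEN:
        raise ValueError(f"prompt must be 1..{CONTEXT_LEN} tokens")
    return math.ceil(prompt_tokens / SEQ_LEN) * per_iteration_s

# Samsung Galaxy S24 / Snapdragon 8 Gen 3 row: lower bound is 0.1469056 s.
print(f"{estimate_ttft(1000, 0.1469056):.2f} s")  # 1.18 s (8 iterations)
print(f"{estimate_ttft(4096, 0.1469056):.2f} s")  # 4.70 s (32 iterations)
```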
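Each new row's last cell points to an export script rather than a downloadable target model. A hedged sketch of invoking it, assuming the `qai_hub_models` package conventions used by AI Hub model cards; the module path `qai_hub_models.models.phi_3_5_mini_instruct.export` and the `--device` flag are assumptions, not confirmed by this diff.

```python
# Hypothetical invocation of the AI Hub Models export script for this model.
# Assumes `pip install qai-hub-models` and the models/<model_id>/export.py
# layout; the module name and flag below are assumptions, not confirmed here.
import subprocess

subprocess.run(
    [
        "python", "-m", "qai_hub_models.models.phi_3_5_mini_instruct.export",
        "--device", "Samsung Galaxy S24",  # one of the devices in the table
    ],
    check=True,
)
```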