qaihm-bot committed
Commit f60d9c8 · verified · 1 Parent(s): cef876b

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -4,7 +4,6 @@ license: other
 tags:
 - llm
 - generative_ai
-- quantized
 - android
 pipeline_tag: text-generation
 
@@ -21,7 +20,7 @@ Please contact us to purchase this model. More details on model performance acro
 
 ### Model Details
 
-- **Model Type:** Text generation
+- **Model Type:** Model_use_case.text_generation
 - **Model Stats:**
   - Input sequence length for Prompt Processor: 128
   - Context length: 4096
@@ -33,9 +32,9 @@ Please contact us to purchase this model. More details on model performance acro
 - TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
 - Response Rate: Rate of response generation after the first response token.
 
-| Model | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds)
+| Model | Precision | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds)
 |---|---|---|---|---|---|
-| PLaMo-1B | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 68.21 | 0.031448 - 1.006336 | -- | -- |
+| PLaMo-1B | w4a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 68.21 | 0.031448000000000004 - 1.0063360000000001 | -- | -- |
 
 ## Deploying PLaMo-1B on-device
 
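A note on reading the TTFT range in the table above: the prompt processor consumes the prompt in 128-token passes, so a longer prompt needs more passes before the first token appears, which is why TTFT is reported as a range from a one-pass prompt up to the full 4096-token context. The sketch below is only an illustrative reading aid: the 128/4096 constants and the two TTFT endpoints come from the figures above, while the linear interpolation between those endpoints is an assumption, not a published characterization of the hardware.

```python
import math

# Figures taken from the model card in this commit.
PROMPT_SEQ_LEN = 128      # tokens per prompt-processor pass
CONTEXT_LEN = 4096        # maximum context length
TTFT_MIN_S = 0.031448     # measured TTFT for a <=128-token prompt (one pass)
TTFT_MAX_S = 1.006336     # measured TTFT for a full-context (4096-token) prompt


def prompt_iterations(prompt_tokens: int) -> int:
    """Number of 128-token prompt-processor passes needed for a prompt."""
    return math.ceil(prompt_tokens / PROMPT_SEQ_LEN)


def estimate_ttft(prompt_tokens: int) -> float:
    """Very rough TTFT estimate in seconds.

    Assumption (not from the model card): TTFT grows roughly linearly with
    the number of prompt-processor passes between the two measured endpoints.
    """
    iters = prompt_iterations(prompt_tokens)
    max_iters = CONTEXT_LEN // PROMPT_SEQ_LEN  # 32 passes at full context
    frac = (iters - 1) / (max_iters - 1)
    return TTFT_MIN_S + frac * (TTFT_MAX_S - TTFT_MIN_S)


if __name__ == "__main__":
    for n in (100, 1024, 4096):
        print(f"{n:>5} prompt tokens -> {prompt_iterations(n):>2} passes, "
              f"~{estimate_ttft(n):.3f} s TTFT (interpolated)")
```

Actual TTFT on device will depend on scheduling and the specific prompt; only the two endpoints in the table are measured values.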