qaihm-bot committed
Commit 937c7ca · verified · 1 Parent(s): 221199f

Upload README.md with huggingface_hub

Files changed (1): README.md (+3 −4)
README.md CHANGED

@@ -4,7 +4,6 @@ license: other
 tags:
 - llm
 - generative_ai
-- quantized
 - android
 pipeline_tag: text-generation
 
@@ -25,7 +24,7 @@ This model is an implementation of JAIS-6p7b-Chat found [here](https://huggingfa
 
 ### Model Details
 
-- **Model Type:** Text generation
+- **Model Type:** Model_use_case.text_generation
 - **Model Stats:**
 - Input sequence length for Prompt Processor: 128
 - Max context length: 2048
@@ -37,9 +36,9 @@ This model is an implementation of JAIS-6p7b-Chat found [here](https://huggingfa
 - TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (2048 tokens).
 - Response Rate: Rate of response generation after the first response token.
 
-| Model | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds)
+| Model | Precision | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds)
 |---|---|---|---|---|---|
-| Jais-6p7b-Chat | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 13.33 | 0.238231 - 3.811696 | -- | -- |
+| Jais-6p7b-Chat | w4a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN | 13.33 | 0.238231 - 3.811696 | -- | -- |
 
 ## Deploying JAIS-6p7b-Chat on-device
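The TTFT figures in the diff above are reported only at the two endpoints of the range: a short prompt (one prompt-processor iteration of 128 tokens) and a full-context prompt (2048 tokens). As a rough sketch, one could estimate TTFT for intermediate prompt lengths by interpolating between those bounds — the linear shape is my assumption, not something the model card claims:

```python
# Rough estimate of TTFT vs. prompt length for the benchmark in the diff above.
# Only the two endpoints are reported; linear interpolation is an assumption.
import math

SEQ_LEN = 128          # prompt-processor input sequence length
MAX_CONTEXT = 2048     # max context length
TTFT_MIN_S = 0.238231  # reported TTFT for a short prompt (<= 128 tokens)
TTFT_MAX_S = 3.811696  # reported TTFT at full context (2048 tokens)

def prompt_iterations(prompt_tokens: int) -> int:
    """Number of prompt-processor passes needed for a prompt."""
    return math.ceil(prompt_tokens / SEQ_LEN)

def estimate_ttft(prompt_tokens: int) -> float:
    """Linearly interpolate TTFT between the reported bounds (an assumption)."""
    iters = prompt_iterations(prompt_tokens)
    max_iters = MAX_CONTEXT // SEQ_LEN  # 16 iterations at full context
    frac = (iters - 1) / (max_iters - 1)
    return TTFT_MIN_S + frac * (TTFT_MAX_S - TTFT_MIN_S)
```

At the endpoints this reproduces the reported values exactly (0.238231 s at 128 tokens, 3.811696 s at 2048 tokens); everything in between is an estimate.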
44