qaihm-bot committed

Commit 0593129 · verified · 1 Parent(s): 6e36f48

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +3 -5
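The commit message says the file was uploaded "with huggingface_hub". A minimal sketch of how such a bot commit is typically produced with that library's `HfApi.upload_file` call — the repo id and token handling here are illustrative assumptions, not taken from this commit:

```python
def upload_readme(repo_id: str, token: str) -> None:
    """Upload a local README.md to a Hugging Face repo as a single commit.

    Illustrative sketch: repo_id/token are placeholders. The import is
    deferred so the sketch can be read without huggingface_hub installed.
    """
    from huggingface_hub import HfApi

    api = HfApi(token=token)
    api.upload_file(
        path_or_fileobj="README.md",          # local file to push
        path_in_repo="README.md",             # destination path in the repo
        repo_id=repo_id,                      # e.g. "<org>/<model>" (assumption)
        commit_message="Upload README.md with huggingface_hub",
    )
```

Each `upload_file` call creates one commit on the target repo, which matches the single-file, single-parent commit shown above.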
README.md CHANGED
@@ -4,7 +4,6 @@ license: other
 tags:
 - llm
 - generative_ai
-- quantized
 - android
 pipeline_tag: text-generation
 
@@ -21,7 +20,7 @@ Allam 7B is SDAIA's first generation edge model, optimized for performance on Sn
 
 ### Model Details
 
-- **Model Type:** Text generation
+- **Model Type:** Model_use_case.text_generation
 - **Model Stats:**
 - Input sequence length for Prompt Processor: 128
 - Max context length: 1024
@@ -34,9 +33,8 @@ Allam 7B is SDAIA's first generation edge model, optimized for performance on Sn
 - TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
 - Response Rate: Rate of response generation after the first response token.
 
-| Model | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds)
-|---|---|---|---|---|---|
-| ALLaM-7B-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 9.5 | 0.23854499999999998 - 1.399168 | -- | -- |
+| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
+|---|---|---|---|---|---|---|---|---|
 
 ## Deploy Allam 7B on Snapdragon X Elite NPU
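The TTFT note in the diff explains that the quoted range spans a short prompt (128 tokens, the lower bound) up to the full context length (the upper bound, stated as 4096 tokens). A minimal sketch of reading that range as a function of prompt length, using the bounds from the removed table row (0.238545 s and 1.399168 s) — the linear-interpolation model is an assumption for illustration, not something the commit states:

```python
def estimate_ttft(prompt_tokens: int,
                  ttft_short: float = 0.238545,   # lower bound from the table (s)
                  ttft_full: float = 1.399168,    # upper bound from the table (s)
                  short_len: int = 128,           # one prompt-processor iteration
                  full_len: int = 4096) -> float: # full context per the TTFT note
    """Estimate TTFT for a given prompt length.

    Assumption: TTFT grows roughly linearly with prompt length between the
    two published bounds. Lengths outside the range are clamped to it.
    """
    prompt_tokens = max(short_len, min(prompt_tokens, full_len))
    frac = (prompt_tokens - short_len) / (full_len - short_len)
    return ttft_short + frac * (ttft_full - ttft_short)
```

For example, `estimate_ttft(128)` returns the lower bound and `estimate_ttft(4096)` the upper; intermediate prompt lengths fall between them.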