tjellm committed · verified
Commit fcd6b9c · 1 Parent(s): 3d01ff3

Update README.md

Files changed (1)
  1. README.md +67 -0
README.md ADDED
@@ -0,0 +1,67 @@
---
license: llama3.1
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---

# EmbeddedLLM/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym
## Introduction
This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset.
## Quantization Strategy
- ***Quantized Layers***: All linear layers excluding "lm_head"
- ***Weight***: FP8 symmetric per-channel
- ***Activation***: FP8 symmetric per-tensor
- ***KV Cache***: FP8 symmetric per-tensor
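To make the scheme above concrete, here is a minimal NumPy sketch of symmetric per-channel weight scaling and symmetric per-tensor activation/KV-cache scaling. It is illustrative only: the FP8 format constant, helper names, and calibration handling are assumptions, not Quark's actual implementation.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # assumed target format: OCP FP8 E4M3, max representable magnitude

def quantize_weight_per_channel_sym(w: np.ndarray):
    """Symmetric per-channel (per output row) FP8 weight quantization sketch."""
    # One scale per output channel so that the channel's largest |weight|
    # maps onto the FP8 dynamic range.
    scale = np.abs(w).max(axis=1, keepdims=True) / FP8_E4M3_MAX
    q = np.clip(w / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)  # values stored as FP8 at runtime
    return q, scale

def quantize_per_tensor_sym(x: np.ndarray, calib_amax: float):
    """Symmetric per-tensor FP8 activation / KV-cache quantization sketch."""
    # The scale comes from calibration (e.g. max |x| seen over the Pile
    # calibration samples), not from the live tensor.
    scale = calib_amax / FP8_E4M3_MAX
    q = np.clip(x / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q, scale

# Dequantization is simply q * scale; a real FP8 kernel runs the matmul on the
# FP8 values and folds the scales into the accumulation.
```

Per-channel weight scales keep each output channel's dynamic range tight, while the per-tensor activation and KV-cache scales are fixed at calibration time so a kernel only needs a single scale per tensor.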
## Quick Start
1. [Download and install Quark](https://quark.docs.amd.com/latest/install.html)
2. Run the quantization script in the example folder using the following command line:
```sh
# Set MODEL_DIR to a local model checkpoint folder or the Hugging Face model ID
export MODEL_DIR="meta-llama/Meta-Llama-3.1-8B-Instruct"

# single GPU
HIP_VISIBLE_DEVICES=0 python quantize_quark.py --model_dir $MODEL_DIR \
    --output_dir /app/model/quark/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym/ \
    --quant_scheme w_fp8_per_channel_sym \
    --kv_cache_dtype fp8 \
    --num_calib_data 128 \
    --model_export quark_safetensors

# If the model is too large for a single GPU, use multiple GPUs instead.
python quantize_quark.py --model_dir $MODEL_DIR \
    --output_dir /app/model/quark/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym/ \
    --quant_scheme w_fp8_per_channel_sym \
    --kv_cache_dtype fp8 \
    --num_calib_data 128 \
    --multi_gpu \
    --model_export quark_safetensors
```

## Deployment
Quark has its own export format that is vLLM-compatible, so FP8 models quantized this way can be deployed efficiently with the vLLM backend.
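As a rough illustration (not part of the original card), a checkpoint exported in this format can typically be loaded through vLLM's offline Python API. The model ID, `kv_cache_dtype` value, and sampling settings below are assumptions; depending on your vLLM version you may also need an explicit quantization argument.

```python
from vllm import LLM, SamplingParams

# Assumption: vLLM picks up the FP8 quantization config from the exported checkpoint.
llm = LLM(
    model="EmbeddedLLM/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym",
    kv_cache_dtype="fp8",  # match the FP8 KV-cache quantization described above
)

outputs = llm.generate(
    ["Explain FP8 quantization in one paragraph."],
    SamplingParams(temperature=0.7, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```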

## Evaluation
Quark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be found in quantize_quark.py.
The evaluation is run in pseudo-quantization mode, so the results may differ slightly from the accuracy of actual quantized inference; they are provided for reference only.
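For reference (this sketch is not taken from quantize_quark.py), wikitext2 perplexity is commonly computed by concatenating the test split, splitting it into fixed-length chunks, and exponentiating the average per-token negative log-likelihood. The model ID, chunk length, and data handling below are illustrative assumptions.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model/tokenizer; the exact procedure in quantize_quark.py may differ.
model_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

# Concatenate the wikitext2 test split into one long token stream.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids

seq_len = 2048  # assumed evaluation context length
nlls, n_tokens = [], 0
for begin in range(0, ids.size(1), seq_len):
    chunk = ids[:, begin : begin + seq_len].to(model.device)
    if chunk.size(1) < 2:  # skip a trailing chunk too short to score
        continue
    with torch.no_grad():
        # labels == input_ids: HF shifts internally and returns mean NLL per predicted token
        loss = model(chunk, labels=chunk).loss
    nlls.append(loss * (chunk.size(1) - 1))
    n_tokens += chunk.size(1) - 1

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"wikitext2 perplexity: {ppl.item():.4f}")
```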
#### Evaluation scores
<table>
  <tr>
    <td><strong>Benchmark</strong></td>
    <td><strong>Meta-Llama-3.1-8B-Instruct</strong></td>
    <td><strong>EmbeddedLLM/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym (this model)</strong></td>
  </tr>
  <tr>
    <td>Perplexity-wikitext2</td>
    <td>7.2169</td>
    <td>7.34375</td>
  </tr>
</table>