ericsorides committed · Commit 35e8a92 (verified) · 1 parent: 2985531

Create README.md

Files changed (1): README.md (+159 -0)
---
license: llama3.2
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
tags:
- text-generation-inference
- llama
- llama3
- facebook
- meta
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---

# Llama 3.2 3B Instruct with Key-Value-Cache enabled in ONNX AWQ (4-bit) format
- Model creator: [Meta-Llama](https://huggingface.co/meta-llama)
- Original model: [Meta-Llama Llama 3.2 3B Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
<!-- description start -->
## Description

This repo contains the ONNX files for the conversion of Llama 3.2 3B Instruct done by Esperanto Technologies.
The model is quantized to 4-bit with AWQ and has the key-value cache (KVC) enabled.

### About AWQ

AWQ is an efficient, accurate and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
More here: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
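For reference, this is roughly what 4-bit AWQ quantization looks like with AutoAWQ. It is a minimal sketch, not the exact recipe used to produce the ONNX files in this repo; the paths and quantization settings are illustrative defaults:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Illustrative paths and settings, not necessarily those used for this repo
model_path = "meta-llama/Llama-3.2-3B-Instruct"
quant_path = "llama-3.2-3B-Instruct-awq-int4"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration and quantize the weights to 4 bits
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized checkpoint
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

Note that this only covers the AWQ weight quantization itself; exporting the quantized model to ONNX with the key-value cache wired up is a separate step.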
<!-- description end -->

## How to download ONNX model and weight files

The easiest way to obtain the model is to clone this whole repo.
Alternatively, you can download the files using the `huggingface-hub` Python library.

```shell
pip3 install "huggingface-hub>=0.17.1"
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download Esperanto/llama-3.2-3B-Instruct-kvc-AWQ-int4-onnx --local-dir llama-3.2-3B-Instruct-kvc-AWQ-int4-onnx --local-dir-use-symlinks False
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
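If you prefer to stay in Python, the whole repo can also be fetched with `snapshot_download` from `huggingface_hub`; here is a short sketch using the same repo id and local directory as the CLI command above:

```python
from huggingface_hub import snapshot_download

# Same repo id and local directory as in the CLI command above
snapshot_download(
    repo_id="Esperanto/llama-3.2-3B-Instruct-kvc-AWQ-int4-onnx",
    local_dir="llama-3.2-3B-Instruct-kvc-AWQ-int4-onnx",
)
```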
## How to run from Python code using ONNXRuntime

This model can easily be run on a CPU using [ONNXRuntime](https://onnxruntime.ai/).

#### First install the packages

```bash
pip3 install onnx==1.16.1
pip3 install onnxruntime==1.17.1
pip3 install transformers  # needed for the tokenizer used in the example below
```
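Optionally, you can list the graph inputs and outputs to see the key-value-cache tensors that the generation loop below feeds. This is just a quick sanity check; the path assumes the repo was downloaded to the local directory used above:

```python
import onnxruntime

# Path assumes the repo was downloaded as shown in the previous section
model_path = "llama-3.2-3B-Instruct-kvc-AWQ-int4-onnx/model.onnx"

session = onnxruntime.InferenceSession(model_path)
print([i.name for i in session.get_inputs()])   # expected: input_ids, attention_mask, past_key_values.* tensors
print([o.name for o in session.get_outputs()])  # expected: logits, present.* tensors
```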
#### Example code: generate text with this model

We define the loop with greedy decoding:

```python
import numpy as np
import onnxruntime
import onnx
from transformers import AutoTokenizer

def generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context):
    model = onnx.load(model_path)

    # we create the inputs for the first iteration
    input_tensor = tokenizer(prompt, return_tensors="pt")
    prompt_size = len(input_tensor['input_ids'][0])
    actual_input = input_tensor['input_ids'].numpy()
    if prompt_size < window:
        actual_input = np.concatenate((tokenizer.bos_token_id * np.ones([1, window - prompt_size], dtype='int64'),
                                       actual_input), axis=1)
    if prompt_size + max_gen_tokens > total_sequence:
        print("ERROR: Longer total sequence is needed!")
        return
    first_attention = np.concatenate((np.zeros([1, total_sequence - window], dtype='int64'),
                                      np.ones((1, window), dtype='int64')), axis=1)
    max_gen_tokens += prompt_size  # we need to generate on top of parsing the prompt
    inputs_names = [node.name for node in model.graph.input]
    output_names = [node.name for node in model.graph.output]
    n_heads = 8  # GQA heads of the KV cache
    inputs_dict = {}
    inputs_dict['input_ids'] = actual_input[:, :window].reshape(1, window)
    inputs_dict['attention_mask'] = first_attention
    # zero-initialized KV cache for the first inference
    for name in inputs_names:
        if name == 'input_ids' or name == 'attention_mask': continue
        inputs_dict[name] = np.zeros([1, n_heads, context - window, 128], dtype="float16")
    new_token = np.array([10])
    next_index = window
    old_j = 0
    total_input = actual_input

    rt_session = onnxruntime.InferenceSession(model_path)
    # we run the inferences
    while next_index < max_gen_tokens:
        if (new_token == tokenizer.eos_token_id).any():
            break
        # inference
        output = rt_session.run(output_names, inputs_dict)
        outs_dictionary = {name: content for (name, content) in zip(output_names, output)}
        # we prepare the inputs for the next inference
        for name in inputs_names:
            if name == 'input_ids':
                old_j = next_index
                if next_index < prompt_size:
                    if prompt_size - next_index >= window: next_index += window
                    else: next_index = prompt_size
                    j = next_index - window
                else:
                    next_index += 1
                    j = next_index - window
                new_token = outs_dictionary['logits'].argmax(-1).reshape(1, window)
                total_input = np.concatenate((total_input, new_token[:, -1:]), axis=1)
                inputs_dict['input_ids'] = total_input[:, j:next_index].reshape(1, window)
            elif name == 'attention_mask':
                inputs_dict['attention_mask'] = np.concatenate((np.zeros((1, total_sequence - next_index), dtype='int64'),
                                                                np.ones((1, next_index), dtype='int64')), axis=1)
            else:
                old_name = name.replace("past_key_values", "present")
                inputs_dict[name] = outs_dictionary[old_name][:, :, next_index - old_j:context - window + (next_index - old_j), :]

    answer = tokenizer.decode(total_input[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
    return answer
```
We now run the inference:

```python
tokenizer = AutoTokenizer.from_pretrained("Esperanto/llama-3.2-3B-Instruct-kvc-AWQ-int4-onnx")
model_path = "llama-3.2-3B-Instruct-kvc-AWQ-int4-onnx/model.onnx"

max_gen_tokens = 20    # number of tokens we want to generate
total_sequence = 128   # total sequence length
context = 1024         # the context to extend the KV cache
window = 16            # number of tokens we want to parse at a time
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

generated = generate_text(model_path, prompt, tokenizer, max_gen_tokens, total_sequence, window, context)
print(generated)
```