---
datasets:
- NeelNanda/pile-10k
base_model:
- moonshotai/Kimi-K2-Instruct
---

## Model Details

This model is a mixed int4 model with group_size 128 and symmetric quantization of [moonshotai/Kimi-K2-Instruct](https://huggingface.co/moonshotai/Kimi-K2-Instruct), generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm. Non-expert layers fall back to 8 bits. Please refer to the "Generate the model" section below for details.
Please follow the license of the original model.
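
For intuition, here is a minimal sketch of what symmetric, group-wise weight quantization does to a single tensor. It is a toy round-to-nearest example, not the AutoRound algorithm itself (AutoRound additionally tunes the rounding via signed gradient descent); the tensor shapes and the `quantize_symmetric_groupwise` helper are illustrative only.

```python
import torch

def quantize_symmetric_groupwise(w: torch.Tensor, bits: int = 4, group_size: int = 128) -> torch.Tensor:
    """Toy symmetric round-to-nearest quantization with one scale per group.

    Illustrative only: AutoRound goes further and optimizes the rounding itself.
    """
    qmax = 2 ** (bits - 1) - 1                      # 7 for int4, 127 for int8
    groups = w.reshape(-1, group_size)              # assumes numel % group_size == 0
    scale = groups.abs().amax(dim=1, keepdim=True) / qmax
    scale = scale.clamp(min=1e-8)                   # avoid division by zero
    q = torch.clamp(torch.round(groups / scale), -qmax - 1, qmax)
    return (q * scale).reshape(w.shape)             # dequantized approximation

w = torch.randn(256, 512)
err4 = (w - quantize_symmetric_groupwise(w, bits=4)).abs().mean()  # expert layers: 4 bits
err8 = (w - quantize_symmetric_groupwise(w, bits=8)).abs().mean()  # non-expert layers: 8 bits
print(f"mean abs error  int4: {err4:.4f}   int8: {err8:.4f}")
```

The 8-bit reconstruction error should come out roughly an order of magnitude smaller than the 4-bit one, which is the motivation for keeping the non-expert layers at 8 bits.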

## How To Use

**Due to a kernel issue, this model can currently only run on CPU.**

### INT4 Inference (CPU)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig  # must be imported so transformers can load the auto-round format
import torch

quantized_model_dir = "Intel/Kimi-K2-Instruct-int4-mixed-AutoRound-cpu"

model = AutoModelForCausalLM.from_pretrained(
    quantized_model_dir,
    torch_dtype=torch.bfloat16,
    device_map="cpu",
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
prompts = [
    "9.11和9.8哪个数字大",
    "strawberry中有几个r?",
    "There is a girl who likes adventure,",
    "Please give a brief introduction of Moonshot AI",
]

texts = []
for prompt in prompts:
    messages = [
        {"role": "system", "content": "You are Kimi, an AI assistant created by Moonshot AI."},
        {"role": "user", "content": [{"type": "text", "text":prompt}]}
    ]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    texts.append(text)
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

outputs = model.generate(
    input_ids=inputs["input_ids"].to(model.device),
    attention_mask=inputs["attention_mask"].to(model.device),
    max_length=200,  # increase this to align with the official usage
    num_return_sequences=1,
    do_sample=False,  # switch to sampling to align with the official usage
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs)
]

decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

for i, prompt in enumerate(prompts):
    print(f"Prompt: {prompt}")
    print(f"Generated: {decoded_outputs[i]}")
    print("-" * 50)
"""
Prompt: 9.11和9.8哪个数字大
Generated: ### 第一步:理解题目

首先,我需要明确题目在问什么。题目给出了两个数字:9.11和9.8,问哪一个更大。这看起来是一个简单的数值比较问题。

### 第二步:数字的表示

这两个数字都是小数,即带有小数部分的数字。小数由整数部分和小数部分组成,小数点左边是整数部分,右边是小数部分。

- 9.11:整数部分是9,小数部分是11。
- 9.8:整数部分是9,小数部分是8。

### 第三步:比较整数部分

首先比较两个数的整数部分:

- 9.11的整数部分是9。
- 9.8的整数部分也是9。

整数部分相同,因此需要比较小数部分。

### 第四步:比较小数部分

小数部分的比较
--------------------------------------------------
Prompt: strawberry中有几个r?
Generated: ### 问题重述
我们需要计算单词 "strawberry" 中有多少个字母 "r"。

### 步骤分解
1. **写出单词**:首先,将单词 "strawberry" 完整地写出来。
2. **逐个字母检查**:从左到右,逐个字母查看是否是 "r"(注意大小写,但这里都是小写)。
3. **计数**:每遇到一个 "r",就增加计数器。

### 详细检查
让我们将 "strawberry" 拆分开来:

字母位置及字母:
1. s
2. t
3. r
4. a
5. w
6. b
7. e
8. r
9. r
10. y

现在,我们检查每个字母是否为 "r":

-
--------------------------------------------------
Prompt: There is a girl who likes adventure,
Generated: There is a girl who likes adventure,
so she ties her shoes with sunrise instead of laces,
lets the wind pick the next city,
and trades her shadow for a passport stamp.

She keeps her memories in mason jars—
one holds the scent of monsoon in Mumbai,
another the hush of Icelandic snow.
When homesick, she unscrews a lid,
inhales, and is gone again.

She once outran her own name
somewhere between Marrakesh and the moon,
answering only to “Hey, you with the constellations in your hair.”
Maps are her love letters;
she folds them into paper boats
and sails them down hotel bathtubs,
whispering, *Find me where the water ends.*
--------------------------------------------------
Prompt: Please give a brief introduction of Moonshot AI
Generated: Moonshot AI is a Chinese artificial-intelligence company founded in 2023 and headquartered in Beijing. Focused on large-scale language models and related products, it released its first model, Kimi, in October 2023 and has since launched upgraded versions such as Kimi 1.5. The company closed a US$1 billion funding round in early 2024 that valued it at about US$2.5 billion, making it one of China’s best-funded AI start-ups.
--------------------------------------------------

"""
```
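
The snippet above decodes greedily for reproducibility. To sample instead, as the inline comments suggest, a variant of the `generate` call like the one below can be used. The specific values (`max_new_tokens=512`, `temperature=0.6`) are assumptions for illustration; check the upstream Kimi-K2-Instruct card for the officially recommended settings.

```python
# Sampled-generation variant (sketch). The parameter values below are
# assumptions; align them with the official Kimi-K2-Instruct usage.
outputs = model.generate(
    input_ids=inputs["input_ids"].to(model.device),
    attention_mask=inputs["attention_mask"].to(model.device),
    max_new_tokens=512,  # assumed budget; replaces the fixed max_length=200
    do_sample=True,
    temperature=0.6,     # assumed value; see the upstream model card
)
```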

### Generate the model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Kimi-K2-Instruct-BF16"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="cpu", torch_dtype="auto", trust_remote_code=True
)

# Expert layers are quantized to 4 bits; every other Linear layer falls back to 8 bits.
layer_config = {}
for n, m in model.named_modules():
    if isinstance(m, torch.nn.Linear):
        if "expert" in n or "shared_experts" in n:
            layer_config[n] = {"bits": 4}
        else:
            layer_config[n] = {"bits": 8}
        print(n, layer_config[n]["bits"])

autoround = AutoRound(model, tokenizer, iters=0, layer_config=layer_config)
autoround.quantize_and_save(format="auto_round", output_dir="tmp_autoround")

```
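
Once saved, the checkpoint in `tmp_autoround` loads back through the same path shown in the inference section. A minimal sanity check, assuming the standard transformers loading flow:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig  # registers the auto-round format with transformers

model = AutoModelForCausalLM.from_pretrained(
    "tmp_autoround", device_map="cpu", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("tmp_autoround", trust_remote_code=True)
print(model.config.quantization_config)  # should show the mixed 4-/8-bit layer configuration
```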


## Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here is a useful link to learn more about Intel's AI software:

- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Cite

```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```

[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)