Improve language tag
Hi! Since the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.
README.md
CHANGED
@@ -1,55 +1,66 @@
(The 55 removed lines of the previous README are not shown in this diff extraction; the 66 added lines of the new README follow.)
---
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
pipeline_tag: text-generation
license: apache-2.0
datasets:
- BAAI/IndustryCorpus2
- BAAI/Infinity-Instruct
- BAAI/Infinity-Preference
---

# mini_qwen

## Introduction

mini_qwen is a 1B-parameter large language model (LLM) project trained from scratch, consisting of three stages: pre-training (PT), supervised fine-tuning (SFT), and direct preference optimization (DPO). Pre-training and fine-tuning need only 12 GB of GPU memory, and DPO only 14 GB, which means you can start your training journey on a T4 GPU.
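The actual training scripts are in the GitHub repository linked below; purely as an illustration of how a footprint around 12 GB is commonly reached with transformers, a hedged sketch of low-memory settings might look like this (all flags and values are assumptions, not the project's real configuration):
```
from transformers import TrainingArguments

# Illustrative low-memory settings only; these are assumptions, not the
# configuration actually used by mini_qwen.
training_args = TrainingArguments(
    output_dir="./mini_qwen_pt",        # hypothetical output directory
    per_device_train_batch_size=1,      # small micro-batch to fit in ~12 GB
    gradient_accumulation_steps=16,     # recover a larger effective batch size
    gradient_checkpointing=True,        # trade recompute for activation memory
    fp16=True,                          # half precision (supported on T4)
    num_train_epochs=1,
    logging_steps=100,
)
```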

mini_qwen starts from the Qwen2.5-0.5B-Instruct model and grows it to 1B parameters by increasing the number of hidden layers, the hidden-state dimension, and the number of attention heads, after which the parameters are randomly initialized. The training data comes from the Beijing Academy of Artificial Intelligence (BAAI): pre-training (16B tokens), fine-tuning (9M samples), and preference data (60K samples). Training uses flash_attention_2 for acceleration and deepspeed on 6 H800 GPUs, taking 25 h (PT, 1 epoch), 43 h (SFT, 3 epochs), and 1 h (DPO, 3 epochs).
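As a rough sketch of that scale-up step, the snippet below widens and deepens the Qwen2.5-0.5B-Instruct config and builds a randomly initialized model with transformers; the specific layer, width, and head counts are placeholder assumptions, not the values actually used by mini_qwen.
```
from transformers import AutoConfig, AutoModelForCausalLM

# Start from the 0.5B config and enlarge it (all sizes here are assumptions).
config = AutoConfig.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
config.hidden_size = 1536          # wider hidden states (head_dim = 1536 / 12 = 128)
config.num_hidden_layers = 28      # more transformer layers
config.num_attention_heads = 12    # more attention heads
config.num_key_value_heads = 2     # keep grouped-query attention divisible

# from_config builds the architecture with randomly initialized weights;
# no pretrained parameters are loaded.
model = AutoModelForCausalLM.from_config(config)
print(f"parameters: {model.num_parameters() / 1e9:.2f}B")
```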

This was a very interesting and worthwhile exercise. Along the way, the project explored scaling laws, the repetition ("parroting") problem, and knowledge injection during the fine-tuning stage, and also fixed many bugs. The whole training process is documented in as much detail as possible, and discussion is welcome.

For more details, see: https://github.com/qiufengqijun/mini_qwen

## Quickstart

Usage is as follows:
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import logging

logging.getLogger("transformers").setLevel(logging.ERROR)  # suppress warnings

# Load the tokenizer and model
model_path = "/path/to/your/model"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)


while True:
    prompt = input("User: ")

    text = prompt  # use this line for the pre-trained (PT) model
    text = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"  # use this line for the SFT and DPO models

    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
    generated_ids = model.generate(**model_inputs, max_new_tokens=512)
    generated_ids = [
        output_ids[len(input_ids):]
        for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    print("Assistant:", response)
```