No changes needed
This PR just ensures the model card is properly checked.
README.md CHANGED
@@ -1,20 +1,21 @@
 ---
 base_model: LGAI-EXAONE/EXAONE-Deep-7.8B
-base_model_relation: quantized
-license: other
-license_name: exaone
-license_link: LICENSE
 language:
 - en
 - ko
+library_name: transformers
+license: other
+license_name: exaone
+license_link: LICENSE
+pipeline_tag: text-generation
 tags:
 - lg-ai
 - exaone
 - exaone-deep
-
-library_name: transformers
+base_model_relation: quantized
 ---
 
+```markdown
 <p align="center">
 <img src="assets/EXAONE_Symbol+BI_3d.png", width="300", style="margin: 40 auto;">
 <br>
@@ -145,8 +146,11 @@ We provide the pre-quantized EXAONE Deep models with **AWQ** and several quantiz
 
 To achieve the expected performance, we recommend using the following configurations:
 
-1. Ensure the model starts with `<thought
-
+1. Ensure the model starts with `<thought>
+` for reasoning steps. The model's output quality may be degraded when you omit it. You can easily apply this feature by using `tokenizer.apply_chat_template()` with `add_generation_prompt=True`. Please check the example code on [Quickstart](#quickstart) section.
+2. The reasoning steps of EXAONE Deep models enclosed by `<thought>
+...
+</thought>` usually have lots of tokens, so previous reasoning steps may be necessary to be removed in multi-turn situation. The provided tokenizer handles this automatically.
 3. Avoid using system prompt, and build the instruction on the user prompt.
 4. Additional instructions help the models reason more deeply, so that the models generate better output.
 - For math problems, the instructions **"Please reason step by step, and put your final answer within \boxed{}."** are helpful.
@@ -184,4 +188,5 @@ The model is licensed under [EXAONE AI Model License Agreement 1.1 - NC](./LICEN
 ```
 
 ## Contact
-LG AI Research Technical Support: [email protected]
+LG AI Research Technical Support: [email protected]
+```
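The configuration advice in the second hunk (start generation with `<thought>\n`, prune earlier reasoning spans in multi-turn chats, and add a step-by-step instruction for math problems) can be sketched in plain Python. This is a minimal illustration, not the model's actual chat template: the `build_prompt` and `strip_thoughts` helpers and the `[|user|]`/`[|assistant|]`/`[|endofturn|]` turn markers are stand-ins assumed for demonstration. In practice, `tokenizer.apply_chat_template(..., add_generation_prompt=True)` from `transformers` handles both steps for you, as the card notes.

```python
import re

# Hypothetical stand-in for tokenizer.apply_chat_template(..., add_generation_prompt=True).
# The key point from the card: the assistant turn must open with "<thought>\n",
# or output quality may degrade. The turn markers below are illustrative only.
def build_prompt(messages):
    parts = [f"[|{m['role']}|]{m['content']}[|endofturn|]" for m in messages]
    parts.append("[|assistant|]<thought>\n")  # add_generation_prompt=True appends this
    return "".join(parts)

# Multi-turn: earlier <thought>...</thought> spans contain many tokens, so drop them
# before feeding history back in (the card says the provided tokenizer does this
# automatically).
def strip_thoughts(text):
    return re.sub(r"<thought>.*?</thought>", "", text, flags=re.DOTALL).strip()

# Point 4 from the card: for math problems, prepend the step-by-step instruction.
question = ("Please reason step by step, and put your final answer "
            "within \\boxed{}. What is 2 + 2?")
prompt = build_prompt([{"role": "user", "content": question}])
print(prompt.endswith("<thought>\n"))  # True
print(strip_thoughts("<thought>\nlong reasoning\n</thought>\nAnswer: \\boxed{4}"))
# Answer: \boxed{4}
```

Note that the instruction goes into the user prompt, not a system prompt, matching point 3 of the card's recommendations.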