Update README.md
README.md CHANGED

@@ -2,6 +2,16 @@
 license: apache-2.0
 base_model:
 - Qwen/Qwen2.5-Math-7B
+language:
+- en
+pipeline_tag: text-generation
+library_name: transformers
+tags:
+- large-language-models
+- DPO
+- direct-preference-optimization
+- reasoning
+- long-CoT
 ---
 
 # 🤗 Model Card: InfiX-ai/InfiAlign-Qwen-7B-DPO
@@ -329,4 +339,4 @@ print(response)
 ## 📰 News
 
 * ✅ We released model checkpoint for `InfiAlign-Qwen-7B-DPO` !
-* ✅ We released [InfiAlign-Qwen-7B-DPO-Eval-Response](https://huggingface.co/datasets/InfiX-ai/InfiAlign-Qwen-7B-DPO-Eval-Response) ! This dataset contains the detailed evaluation responses generated by our DPO model across various benchmarks.
+* ✅ We released [InfiAlign-Qwen-7B-DPO-Eval-Response](https://huggingface.co/datasets/InfiX-ai/InfiAlign-Qwen-7B-DPO-Eval-Response) ! This dataset contains the detailed evaluation responses generated by our DPO model across various benchmarks.
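The block this commit adds is Hugging Face model-card YAML front matter (flat `key: value` scalars plus `key:` headers followed by `- item` lists). As a rough illustration of what the new metadata resolves to, here is a minimal hand-rolled sketch that parses exactly this flat subset into a Python dict; it is not part of the commit and not a general YAML parser (for real use, PyYAML or the `huggingface_hub` card utilities would be the tools of choice):

```python
# Minimal sketch: parse the flat front matter this commit produces.
# Handles only top-level `key: value` scalars and `key:` + `- item` lists.
FRONT_MATTER = """\
license: apache-2.0
base_model:
- Qwen/Qwen2.5-Math-7B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- large-language-models
- DPO
- direct-preference-optimization
- reasoning
- long-CoT
"""

def parse_front_matter(text):
    meta, current_key = {}, None
    for line in text.splitlines():
        if line.startswith("- ") and current_key is not None:
            # List item belonging to the most recent `key:` header.
            meta[current_key].append(line[2:].strip())
        else:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:                  # scalar, e.g. `license: apache-2.0`
                meta[key] = value
                current_key = None
            else:                      # list header, e.g. `tags:`
                meta[key] = []
                current_key = key
    return meta

metadata = parse_front_matter(FRONT_MATTER)
print(metadata["library_name"])   # -> transformers
print(metadata["tags"])
```

The scalar keys (`pipeline_tag`, `library_name`) are what the Hub uses to pick the inference widget and default loading library, while the list keys (`language`, `tags`) drive search and filtering.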