add measurement.json

- README.md +114 -0
- measurement.json +0 -0

README.md (ADDED)

---
library_name: transformers
license: apache-2.0
base_model: open-thoughts/OpenThinker-32B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: OpenThinker-32B
  results: []
datasets:
- open-thoughts/open-thoughts-114k
---

<p align="center">
  <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>

# OpenThinker-32B

This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset.

The dataset was derived by distilling DeepSeek-R1 using the [data pipeline available on GitHub](https://github.com/open-thoughts/open-thoughts). More information is available on the [OpenThoughts-114k dataset card](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k).
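
Since the card lists `library_name: transformers`, the checkpoint should load with the standard `transformers` API. A minimal usage sketch; the prompt and generation settings below are illustrative assumptions, not taken from this card:

```python
# Minimal usage sketch with the standard transformers API.
# Prompt and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought, so leave generous headroom.
output_ids = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```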

The numbers reported in the table below were evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).

|Model Name|Dataset Size|AIME24 I/II|AIME25 I|MATH500|GPQA Diamond|LCBv2|
|---|---|---|---|---|---|---|
|LIMO-32B|0.8k|56.7|49.3|86.6|58.1|60.0|
|s1-32B|1k|36.0|25.3|84.8|50.5|40.9|
|s1.1-32B|1k|64.7|49.3|89.0|60.1|65.5|
|DeepSeek-R1-Distill-Qwen-32B|800k (closed)|**76.7**|**55.9**|89.4|57.6|**71.2**|
|**OpenThinker-32B**|114k|66.0|53.3|**90.6**|**61.6**|68.9|

We are fully open-source. Our [model weights](https://huggingface.co/open-thoughts), [datasets](https://huggingface.co/open-thoughts), [data generation code](https://github.com/open-thoughts/open-thoughts), [evaluation code](https://github.com/mlfoundations/Evalchemy), and [training code](https://github.com/hiyouga/LLaMA-Factory) are all publicly available.

| | Open Weights | Open Data | Open Code |
|--|--------------|-----------|-----------|
|OpenThinker-32B|✅|[✅](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)|[✅](https://github.com/open-thoughts/open-thoughts)|
|DeepSeek-R1-Distill-Qwen-32B|✅|❌|❌|
|OpenAI/Gemini|❌|❌|❌|

## Intended uses & limitations

This model is released under the Apache 2.0 License.

## Training procedure

We finetune [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) for 3 epochs with a 16k context length using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). Our [full training configuration](https://github.com/open-thoughts/open-thoughts/blob/main/train/OpenThinker-32B.yaml) is provided in [our repository](https://github.com/open-thoughts/open-thoughts/tree/main).

Training the 32B model on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) was done on AWS SageMaker with four 8xH100 P5 nodes and took around 90 hours. For training on [OpenThoughts-Unverified-173k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverified-173k), we used 96 nodes of 4xA100 (64 GB per GPU) on the Leonardo Supercomputer; training took 30 hours, consuming 11,520 A100 hours.
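
As a quick sanity check on the compute figure above (pure arithmetic, using only the numbers just stated):

```python
# GPU-hour accounting for the Leonardo run described above.
nodes, gpus_per_node, wall_clock_hours = 96, 4, 30
a100_hours = nodes * gpus_per_node * wall_clock_hours
assert a100_hours == 11_520  # matches the reported total
```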

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
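
The reported totals follow directly from the per-device settings; a quick check of the arithmetic:

```python
# Sanity check: the totals above follow from the per-device settings.
train_batch_size = 1              # per-device train batch size
eval_batch_size = 8               # per-device eval batch size
num_devices = 32
gradient_accumulation_steps = 3

assert train_batch_size * num_devices * gradient_accumulation_steps == 96  # total_train_batch_size
assert eval_batch_size * num_devices == 256                                # total_eval_batch_size
```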

### Framework versions

- Transformers 4.46.1
- PyTorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
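
When reproducing results, it may help to confirm the local environment matches the pinned versions above. A small check, using the standard import names for these packages:

```python
# Print installed versions to compare against the pinned ones listed above.
import datasets
import tokenizers
import torch
import transformers

for pkg in (transformers, torch, datasets, tokenizers):
    print(f"{pkg.__name__}: {pkg.__version__}")
```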

More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).

# Citation

```
@misc{openthoughts,
  author = {Team, OpenThoughts},
  month = jan,
  title = {{Open Thoughts}},
  howpublished = {https://open-thoughts.ai},
  year = {2025}
}
```

# Links

- 📊 [Open Thoughts Launch Blog Post](https://www.open-thoughts.ai/blog/launch)
- 📊 [Open Thoughts Measuring Reasoning with Evalchemy Blog Post](https://www.open-thoughts.ai/blog/measure)
- 📊 [Open Thoughts OpenThinker-32B Blog Post](https://www.open-thoughts.ai/blog/scale)
- 💻 [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- 🧠 [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- 🧠 [OpenThoughts-Unverified-173k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverified-173k)
- 🤖 [OpenThinker-7B model](https://huggingface.co/open-thoughts/OpenThinker-7B)
- 🤖 [OpenThinker-7B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-7B-Unverified)
- 🤖 [OpenThinker-32B model](https://huggingface.co/open-thoughts/OpenThinker-32B) - this model
- 🤖 [OpenThinker-32B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-32B-Unverified)

measurement.json (ADDED)

The diff for this file is too large to render.