nielsr (HF Staff) committed
Commit e067588 · verified · 1 Parent(s): eced4e6

Add model card metadata and description for MM-EUREKA


This PR adds missing model card metadata, including `library_name`, `pipeline_tag`, and `license`, improving discoverability and clarity. It also adds a model description section to the README, as well as a link to the paper.

Files changed (1)
  1. README.md +185 -0
README.md ADDED
---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---

# MM-EUREKA: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning

<div align="center">
<img alt="MM-Eureka logo" src="./docs/logo.png" style="height: 200px;" />
</div>

<div align="center">

# MM-EUREKA

</div>

<div align="center">
<p align="center">
📖<a href="https://github.com/ModalMinds/MM-EUREKA/blob/main/MM_Eureka_paper.pdf">Paper</a> |
📊<a href="https://huggingface.co/datasets/FanqingM/MM-Eureka-Dataset">Datasets</a> |
🤗<a href="https://huggingface.co/FanqingM/MM-Eureka-8B">MM-Eureka-8B</a> |
🤗<a href="https://huggingface.co/FanqingM/MM-Eureka-Zero-38B">MM-Eureka-Zero-38B</a>
</p>
</div>

<hr>
<div align="center">
<p style="text-align: center;">MM-EUREKA: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning</p>
</div>
<hr>
<div align="center">
<a href="https://github.com/ModalMinds/MM-EUREKA/blob/main/MM_Eureka_paper.pdf">[[Paper PDF Link]]</a>
</div>

<div align="center">
<img alt="Visual Aha Moment" src="./docs/visual_aha_moment.png"/>
</div>

## Model Description

MM-Eureka and MM-Eureka-Zero are a series of multimodal reasoning models trained with rule-based large-scale reinforcement learning, and they exhibit strong multimodal reasoning capabilities. Reinforcement learning was applied successfully to both instruction-tuned and pretrained base models. They are presented in the paper [MM-EUREKA: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning](https://huggingface.co/papers/2503.07365).
The code can be found at https://github.com/ModalMinds/MM-EUREKA.

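As a quick orientation, the sketch below shows one way to query the model with `transformers`. It is a minimal sketch, not the repository's official inference code: it assumes the checkpoints expose InternVL's `trust_remote_code` chat interface, simplifies preprocessing to a single 448x448 tile instead of InternVL's dynamic tiling, and uses a placeholder image path.

```python
# Minimal inference sketch (assumptions: InternVL-style chat interface via
# trust_remote_code; single-tile preprocessing instead of dynamic tiling).
import torch
from PIL import Image
from torchvision import transforms as T
from transformers import AutoModel, AutoTokenizer

path = "FanqingM/MM-Eureka-8B"
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True
).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

# ImageNet normalization, as used by InternVL's vision encoder.
transform = T.Compose([
    T.Resize((448, 448)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
pixel_values = transform(Image.open("problem.png").convert("RGB"))
pixel_values = pixel_values.unsqueeze(0).to(torch.bfloat16).cuda()

question = "<image>\nSolve the problem in the image step by step."
response = model.chat(tokenizer, pixel_values, question,
                      dict(max_new_tokens=1024, do_sample=False))
print(response)
```
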
## 🗞️ News

- **[2025/03/07]** We released `MM-Eureka`.
  - 📖 Paper: [MM-EUREKA-paper](https://github.com/ModalMinds/MM-EUREKA/blob/main/MM_Eureka_paper.pdf)
  - 🤗 Model: [MM-Eureka-8B](https://huggingface.co/FanqingM/MM-Eureka-8B) & [MM-Eureka-Zero-38B](https://huggingface.co/FanqingM/MM-Eureka-Zero-38B)
  - 📊 Dataset: [MM-Eureka-Dataset](https://huggingface.co/datasets/FanqingM/MM-Eureka-Dataset)

## 🚀 Features

This repository is built upon [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF), introducing several key enhancements:

- **Multimodal RFT Support**: Extends OpenRLHF to incorporate **vision-language models (VLMs)**, currently supporting **InternVL**, enabling multimodal reasoning capabilities.
  - Currently supports **RLOO**, **REINFORCE++**, and **GRPO** training using Ray.
  - vLLM integration and distributed training.
  - Supports the hybrid engine (`--colocate_all_models`, `--vllm_enable_sleep`).
- **Better Rule-based Reward Support**: Better training visualization for rule-based rewards (e.g., Format Reward, Accuracy Reward, Repetition Penalty).
- **Online Filtering**: Filters out experiences based on the Accuracy Reward during training, as in [PRIME](https://github.com/PRIME-RL/PRIME); a sketch of the idea follows this list.
  - Use `--enable_accuracy_filter`, `--freezing_filter_steps`, `--accuracy_lower_bound`, and `--accuracy_upper_bound` to control the behavior of the online accuracy filter.
  - The online accuracy filter is not enabled in our default settings; refer to the Discussion section of our [paper](https://github.com/ModalMinds/MM-EUREKA/blob/main/MM_Eureka_paper.pdf) for more details.

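The sketch below illustrates the idea behind those filtering flags. It is only an illustrative sketch: the function and argument names are hypothetical rather than the repository's actual API, and it assumes each prompt is rolled out several times with a 0/1 accuracy reward per rollout.

```python
# Illustrative sketch of online accuracy filtering (hypothetical names, not
# the repository's API). Prompts whose group-level mean accuracy falls outside
# [lower_bound, upper_bound] are dropped from the RL batch, since all-correct
# or all-wrong rollout groups carry little learning signal.
from statistics import mean

def filter_experiences(groups, step, freezing_steps=20,
                       lower_bound=0.1, upper_bound=0.9):
    """groups: list of per-prompt lists of accuracy rewards (0.0 or 1.0)."""
    if step < freezing_steps:  # cf. --freezing_filter_steps: warm-up, no filtering
        return groups
    return [g for g in groups if lower_bound <= mean(g) <= upper_bound]

# Example: with 4 rollouts per prompt, only the mixed-difficulty prompt survives.
batch = [[1, 1, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]]
print(filter_experiences(batch, step=100))  # -> [[0, 1, 1, 0]]
```
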
## 🤖 Models

<div align="center">
<img alt="Training Log" src="./docs/training_log.png"/>
</div>
*Figure 1 | Training-time scale-up of accuracy reward and response length under rule-based RL. (a) shows training on InternVL2.5-instruct-8B, while (b) shows training on InternVL2.5-pretrained-38B. Stable improvements in accuracy reward and response length are achieved regardless of whether training starts from an instruct model or a pretrained model.*

- 🤗 [MM-Eureka-8B](https://huggingface.co/FanqingM/MM-Eureka-8B)

- 🤗 [MM-Eureka-Zero-38B](https://huggingface.co/FanqingM/MM-Eureka-Zero-38B)

## 🏁 Getting Started

### 📦 Installation

```shell
git clone https://github.com/ModalMinds/MM-EUREKA.git
cd MM-EUREKA
pip install -e ".[vllm]"   # quoted so the extra also resolves under zsh

# Install flash-attn==2.3.6:
pip install flash-attn==2.3.6 --no-build-isolation

# Alternatively, compile it from source:
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
git checkout v2.3.6
python setup.py install
```

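After installation, a quick way to confirm the environment is usable is the hedged check below; it is not part of the repository's scripts, just a sanity probe.

```python
# Sanity-check the installed stack: flash-attn imports and CUDA is visible.
import flash_attn
import torch
import transformers

print("flash-attn:", flash_attn.__version__)         # expect 2.3.6
print("torch CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
```
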
### 📂 Data Preparation

You can download our training data from [MM-Eureka-Dataset](https://huggingface.co/datasets/FanqingM/MM-Eureka-Dataset).

Once downloaded, refer to the section below for any additional data formatting. You may need to update the `image_urls` field to reference your local image paths for proper processing.

#### Custom dataset

For a custom dataset, format your data into a JSONL file, where each entry is a dictionary organized as follows.

```json
{
  "id": "0",
  "conversations": [
    {
      "role": "system",
      "content": "system_prompt"
    },
    {
      "role": "user",
      "content": "user_prompt"
    }
  ],
  "answer": "ground truth that can be parsed and verified by math_verify",
  "image_urls": ["file:///path/to/image1", "file:///path/to/image2"]
}
```

> [!NOTE]
> For text-only inputs, we follow InternVL's official approach, which requires a dummy image input.
> Specifically, you should provide a (224, 224) pure white image as a placeholder.
> We have already provided such a blank image at: `examples/blank.png`

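Records like the one above can also be generated programmatically. The sketch below is illustrative only (the prompts, paths, and output filename are placeholders, not the repository's tooling); it also recreates the white placeholder image used for text-only samples.

```python
# Sketch: write one training record to a JSONL file and create the
# (224, 224) pure-white placeholder image for text-only samples.
# All paths and prompt strings here are illustrative placeholders.
import json
from PIL import Image

# White placeholder, matching the provided examples/blank.png.
Image.new("RGB", (224, 224), (255, 255, 255)).save("blank.png")

record = {
    "id": "0",
    "conversations": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Compute 3 + 4 * 2."},
    ],
    "answer": "11",  # must be parseable and verifiable by math_verify
    "image_urls": ["file:///abs/path/to/blank.png"],  # dummy image for text-only input
}

with open("custom_data.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```
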
### 🌐 Start Training

Before starting your own training, ensure that the paths in the provided training scripts are correctly set and that environment variables like `$MASTER_ADDR` and `$NODE_RANK` are properly configured.

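For multi-node runs it can help to fail fast when those variables are missing; a minimal hedged check (variable names taken from the sentence above, example values illustrative):

```python
# Fail fast if the distributed-training environment is incomplete.
import os

required = ["MASTER_ADDR", "NODE_RANK"]
missing = [name for name in required if name not in os.environ]
if missing:
    raise SystemExit(
        f"Missing {missing}; e.g. export MASTER_ADDR=10.0.0.1 NODE_RANK=0"
    )
print({name: os.environ[name] for name in required})
```
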
**Start MM-Eureka-8B training**

- For a single node:

```shell
sh examples/scripts/train_mm_eureka_8b_single_node.sh
```

- For multiple nodes:

```shell
sh examples/scripts/train_mm_eureka_8b_multi_node.sh
```

**Start MM-Eureka-Zero-38B training**

```shell
sh examples/scripts/train_mm_eureka_zero_38b_multi_node.sh
```

## ⭐ Starchart

[![Star History Chart](https://api.star-history.com/svg?repos=ModalMinds/MM-EUREKA&type=Date)](https://star-history.com/#ModalMinds/MM-EUREKA&Date)

## 🤝 Contribution

MM-Eureka is still under active development; if you want to contribute, please feel free to open a pull request or create an issue.

Please refer to `CONTRIBUTING.md` before you dive in!

## 📬 Contact

If you have any questions or would like to engage with our community, feel free to scan the QR code below to join our WeChat group.

<div align="center">
<img alt="MM-Eureka WeChat group QR code" src="https://github.com/user-attachments/assets/a04ebfef-9ac4-44ae-a07b-48586794903a" style="height: 400px;" />
</div>

## 🎓 Acknowledgements

We acknowledge the outstanding open-source contributions from [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF), [LMM-R1](https://github.com/TideDra/lmm-r1) and [vLLM](https://github.com/vllm-project/vllm). We also extend our gratitude to [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) and [InternVL](https://github.com/OpenGVLab/InternVL) for their open-source techniques and base models, which have enabled us to further our exploration.

## 📜 Citation

```bibtex
@misc{MM-EUREKA2025,
  title={MM-EUREKA: Exploring Visual Aha Moment with Rule-Based Large-Scale Reinforcement Learning},
  author={Fanqing Meng and Lingxiao Du and Zongkai Liu and Zhixiang Zhou and Quanfeng Lu and Daocheng Fu and Botian Shi and Wenhai Wang and Junjun He and Kaipeng Zhang and Ping Luo and Yu Qiao and Qiaosheng Zhang and Wenqi Shao},
  year={2025},
  howpublished={\url{https://github.com/ModalMinds/MM-EUREKA}},
}
```