Upload README.md with huggingface_hub
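The commit title refers to the huggingface_hub upload API. As a point of reference only, a commit like this one is typically produced with `HfApi.upload_file`; a minimal sketch, where the `repo_id` is a placeholder, not the actual repository:

```python
# Minimal sketch of the upload named in the commit title.
# The repo_id below is a placeholder for the real model repository.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` by default
api.upload_file(
    path_or_fileobj="README.md",          # local file to upload
    path_in_repo="README.md",             # destination path inside the repo
    repo_id="your-org/LLaVANext-OA-32B-DPO",  # placeholder repo id
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```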
README.md CHANGED
@@ -23,7 +23,7 @@ By applying a DPO stage using OmniAlign-V-DPO datasets, we can further improve the
 ### Performance
 By integrating the OmniAlign-V-DPO dataset in the DPO stage, we can further improve the alignment of MLLMs with human preferences. Our LLaVANext-OA-32B-DPO even surpasses Qwen2VL-72B on MM-AlignBench.

-| Model | Win Rate
+| Model | Win Rate | Reward | Better+ | Better | Tie | Worse | Worse+ |
 |-------------------------------|------------------------------|---------------------------|------------|-----|----|-----|-----|
 | Claude3.5V-Sonnet | 84.9 | +51.4 | 70 | 144 | 12 | 31 | 4 |
 | GPT-4o | 81.3 | +49.0 | 81 | 124 | 12 | 31 | 4 |
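The paragraph in the diff attributes the gains to a DPO stage on OmniAlign-V-DPO preference data. As an illustrative sketch only, not the repository's actual training code, this is roughly what such a stage looks like with TRL's `DPOTrainer`, assuming a recent TRL version, a text-only policy model, and a preference dataset with `prompt`/`chosen`/`rejected` columns; the real pipeline fine-tunes a multimodal LLaVA-Next model, and the model and dataset ids below are placeholders:

```python
# Sketch of a DPO stage with TRL; ids are placeholders, and the actual
# OmniAlign-V-DPO recipe targets a multimodal LLaVA-Next model instead.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "your-org/your-sft-checkpoint"  # placeholder: SFT model to align
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs with "prompt", "chosen", "rejected" columns (placeholder id).
dataset = load_dataset("your-org/your-dpo-dataset", split="train")

args = DPOConfig(
    output_dir="dpo-output",
    beta=0.1,  # weight of the implicit KL penalty against the reference model
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

# With ref_model omitted, TRL uses a frozen copy of the policy as the reference.
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```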