PhoenixZ committed on
Commit 85dfb96 · verified · 1 parent: 6f51a13

Upload README.md with huggingface_hub
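The commit message says this README was pushed with `huggingface_hub`. A minimal sketch of what such an upload call can look like, assuming the standard `HfApi.upload_file` client method; the `repo_id` value is a placeholder, not necessarily the real repository:

```python
# Hypothetical sketch of a README upload via the huggingface_hub client.
# "your-org/your-model" is a placeholder repo_id for illustration only.
from huggingface_hub import HfApi


def upload_readme(repo_id: str, readme_path: str = "README.md"):
    """Push a local README.md to the root of a Hub repo."""
    api = HfApi()
    return api.upload_file(
        path_or_fileobj=readme_path,   # local file to upload
        path_in_repo="README.md",      # destination path in the repo
        repo_id=repo_id,               # e.g. "your-org/your-model"
        commit_message="Upload README.md with huggingface_hub",
    )
```

Calling `upload_readme("your-org/your-model")` would create a commit like this one; authentication (e.g. `huggingface-cli login`) is required before the call succeeds.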

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -23,7 +23,7 @@ By applying DPO stage using OmniAlign-V-DPO datasets, we can further improve the
 ### Performance
 By integrating OmniAlign-V-DPO datasets in DPO stage, we can further improve the alignment of MLLMs with human preference. Our LLaVANext-OA-32B-DPO even surpasses Qwen2VL-72B on MM-AlignBench.
 
-| Model | Win Rate $\uparrow$ | Reward$\uparrow$ | Better+ | Better | Tie | Worse | Worse+ |
+| Model | Win Rate | Reward | Better+ | Better | Tie | Worse | Worse+ |
 |-------------------------------|------------------------------|---------------------------|------------|-----|----|-----|-----|
 | Claude3.5V-Sonnet | 84.9 | +51.4 | 70 | 144 | 12 | 31 | 4 |
 | GPT-4o | 81.3 | +49.0 | 81 | 124 | 12 | 31 | 4 |