Model,accuracy,alpacaeval-easy,alpacaeval-hard,alpacaeval-length,donotanswer,hep-cpp,hep-go,hep-java,hep-js,hep-python,hep-rust,llmbar-adver-GPTInst,llmbar-adver-GPTOut,llmbar-adver-manual,llmbar-adver-neighbor,llmbar-natural,math-prm,mt-bench-easy,mt-bench-hard,mt-bench-med,refusals-dangerous,refusals-offensive,xstest-should-refuse,xstest-should-respond
OpenAssistant/reward-model-deberta-v3-large-v2,0.614,0.405,0.0,0.405,0.296,0.555,0.622,0.488,0.793,0.773,0.744,0.632,0.524,0.0,0.742,0.579,0.83,0.083,1.0,1.0,0.38,0.29,0.799,0.636
Skywork/Skywork-Reward-Llama-3.1-8B-v0.2,0.872,0.823,0.855,0.835,0.556,0.945,0.939,0.933,0.927,0.926,0.933,0.793,0.762,0.628,0.782,0.868,0.837,1.0,0.743,1.0,0.88,0.97,0.968,0.919
allenai/tulu-2-dpo-7b,0.498,0.544,0.368,0.481,0.511,0.482,0.524,0.512,0.488,0.503,0.494,0.517,0.524,0.512,0.516,0.421,0.483,0.25,0.629,0.368,0.34,0.58,0.558,0.555
openbmb/UltraRM-13b,0.644,0.747,0.842,0.747,0.341,0.762,0.744,0.738,0.707,0.718,0.738,0.529,0.381,0.395,0.645,0.697,0.698,0.833,0.686,0.842,0.35,0.22,0.39,0.737
openbmb/Eurus-RM-7b,0.734,0.987,1.0,0.861,0.289,0.866,0.915,0.927,0.915,0.902,0.878,0.299,0.619,0.209,0.427,0.789,0.664,0.917,0.629,0.816,0.38,0.48,0.792,0.834