Reward for Non-Verifiable Queries

#12
by DaleMeng - opened

Thanks for your great work!
I noticed that your paper "DeepDistill: Enhancing LLM Reasoning Capabilities via Large-Scale Difficulty-Graded Data Training" mentions that, for the Multi-turn Conversations and Others rewards, you chose the Decision-Tree-Reward-Llama-3.1-8B model to evaluate three dimensions and take the average as the final score, so I assumed you use the same reward model for Non-Verifiable Queries here as well. Is there any special reason for choosing the Decision-Tree-Reward-Llama-3.1-8B model?
According to their technical report (https://rlhflow.github.io/posts/2025-01-22-decision-tree-reward-model/), it indeed achieves SOTA on RewardBench v1, but I believe the decision-tree part, which determines how the different dimension scores are combined, plays an important role in reaching those SOTA results. In your paper, however, the coherence, correctness, and helpfulness values are simply averaged directly; perhaps a different weighting scheme could perform better?
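
To make the comparison concrete, here is a quick sketch of what I mean. The dimension names follow the description above, the scores are made up, and the weights are purely illustrative (not from the paper or the RLHFlow report); only the plain averaging corresponds to what the paper describes:

```python
# Sketch: combining per-dimension reward scores.
# The dimensions (coherence, correctness, helpfulness) follow the paper's description;
# in practice the values would come from the Decision-Tree-Reward-Llama-3.1-8B model.

def unweighted_reward(scores: dict[str, float]) -> float:
    """Plain average over the dimensions, as the paper describes."""
    return sum(scores.values()) / len(scores)

def weighted_reward(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average; the weights are illustrative, not taken from the paper."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Hypothetical per-dimension scores for a single response.
scores = {"coherence": 3.2, "correctness": 4.1, "helpfulness": 3.8}

print(unweighted_reward(scores))  # simple mean of the three dimensions
print(weighted_reward(scores, {"coherence": 0.2,     # e.g. emphasize correctness
                               "correctness": 0.5,   # for non-verifiable queries
                               "helpfulness": 0.3}))
```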
