RLHF papers

A collection by heegyu • updated Nov 19, 2024

  • Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment

    Paper • arXiv:2310.00212 • Published Sep 30, 2023 • 2 upvotes

  • Stabilizing RLHF through Advantage Model and Selective Rehearsal

    Paper • arXiv:2309.10202 • Published Sep 18, 2023 • 11 upvotes

  • Aligning Language Models with Offline Reinforcement Learning from Human Feedback

    Paper • arXiv:2308.12050 • Published Aug 23, 2023 • 1 upvote

  • Secrets of RLHF in Large Language Models Part I: PPO

    Paper • arXiv:2307.04964 • Published Jul 11, 2023 • 29 upvotes

  • Self-Rewarding Language Models

    Paper • arXiv:2401.10020 • Published Jan 18, 2024 • 148 upvotes