Hugging Face

Project Prevail (Prevail-Safe-AI)

Team · community
https://bit.ly/project-prevail

AI & ML interests

None defined yet.

Members: Sergei Smirnov, Tianyi Qiu

TianyiQ authored 5 papers over 1 year ago

PKU-SafeRLHF: A Safety Alignment Preference Dataset for Llama Family Models

Paper • 2406.15513 • Published Jun 20, 2024 • 1

ProgressGym: Alignment with a Millennium of Moral Progress

Paper • 2406.20087 • Published Jun 28, 2024 • 4

AI Alignment: A Comprehensive Survey

Paper • 2310.19852 • Published Oct 30, 2023

Language Models Resist Alignment

Paper • 2406.06144 • Published Jun 10, 2024

Reward Generalization in RLHF: A Topological Perspective

Paper • 2402.10184 • Published Feb 15, 2024