Listener-Rewarded Thinking in VLMs for Image Preferences Paper • 2506.22832 • Published 5 days ago • 22
Inverse-and-Edit: Effective and Fast Image Editing by Cycle Consistency Models Paper • 2506.19103 • Published 10 days ago • 41
DreamBoothDPO: Improving Personalized Generation using Direct Preference Optimization Paper • 2505.20975 • Published May 27 • 36
Confidence Is All You Need: Few-Shot RL Fine-Tuning of Language Models Paper • 2506.06395 • Published 28 days ago • 124
Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models Paper • 2506.06751 • Published 26 days ago • 72
Will It Still Be True Tomorrow? Multilingual Evergreen Question Classification to Improve Trustworthy QA Paper • 2505.21115 • Published May 27 • 134
AmbiK: Dataset of Ambiguous Tasks in Kitchen Environment Paper • 2506.04089 • Published 29 days ago • 46
Process Reinforcement through Implicit Rewards Article • By ganqu and 1 other • Jan 3 • 29
Learn Your Reference Model for Real Good Alignment Paper • 2404.09656 • Published Apr 15, 2024 • 88
ProcessBench: Identifying Process Errors in Mathematical Reasoning Paper • 2412.06559 • Published Dec 9, 2024 • 84
I Have Covered All the Bases Here: Interpreting Reasoning Features in Large Language Models via Sparse Autoencoders Paper • 2503.18878 • Published Mar 24 • 119
One-Step Residual Shifting Diffusion for Image Super-Resolution via Distillation Paper • 2503.13358 • Published Mar 17 • 96
LLM-Microscope: Uncovering the Hidden Role of Punctuation in Context Memory of Transformers Paper • 2502.15007 • Published Feb 20 • 175
How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? Paper • 2502.14502 • Published Feb 20 • 91
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity Paper • 2502.13063 • Published Feb 18 • 73
🧠 Reasoning datasets Collection • Datasets with reasoning traces for math and code released by the community • 24 items • Updated May 19 • 154