johnr0's Collections
- Chain-of-Verification Reduces Hallucination in Large Language Models (arXiv:2309.11495)
- EIPE-text: Evaluation-Guided Iterative Plan Extraction for Long-Form Narrative Text Generation (arXiv:2310.08185)
- The Consensus Game: Language Model Generation via Equilibrium Search (arXiv:2310.09139)
- In-Context Pretraining: Language Modeling Beyond Document Boundaries (arXiv:2310.10638)
- Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model (arXiv:2310.09520)
- Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection (arXiv:2310.11511)
- VeRA: Vector-based Random Matrix Adaptation (arXiv:2310.11454)
- Safe RLHF: Safe Reinforcement Learning from Human Feedback (arXiv:2310.12773)
- In-Context Learning Creates Task Vectors (arXiv:2310.15916)
- Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time (arXiv:2310.17157)
- Controlled Decoding from Language Models (arXiv:2310.17022)
- Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs (arXiv:2311.02262)
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (arXiv:2311.03285)
- Prompt Cache: Modular Attention Reuse for Low-Latency Inference (arXiv:2311.04934)
- System 2 Attention (is something you might need too) (arXiv:2311.11829)
- Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning (arXiv:2311.11077)
- Tuning Language Models by Proxy (arXiv:2401.08565)
- Self-Rewarding Language Models (arXiv:2401.10020)
- Collaborative Development of NLP models (arXiv:2305.12219)
- Suppressing Pink Elephants with Direct Principle Feedback (arXiv:2402.07896)
- A Tale of Tails: Model Collapse as a Change of Scaling Laws (arXiv:2402.07043)
- Direct Language Model Alignment from Online AI Feedback (arXiv:2402.04792)