SAEs Can Improve Unlearning: Dynamic Sparse Autoencoder Guardrails for Precision Unlearning in LLMs Paper • 2504.08192 • Published Apr 11 • 4
Position: Mechanistic Interpretability Should Prioritize Feature Consistency in SAEs Paper • 2505.20254 • Published 4 days ago • 5
The Ultra-Scale Playbook 🌌 The ultimate guide to training LLMs on large GPU clusters
CoRAG: Collaborative Retrieval-Augmented Generation Paper • 2504.01883 • Published Apr 2 • 10 • 2
Decoding Dark Matter: Specialized Sparse Autoencoders for Interpreting Rare Concepts in Foundation Models Paper • 2411.00743 • Published Nov 1, 2024 • 7