Decomposing MLP Activations into Interpretable Features via Semi-Nonnegative Matrix Factorization
Abstract
Semi-nonnegative matrix factorization (SNMF) identifies interpretable features in LLMs by directly decomposing MLP activations, outperforming SAEs and supervised baselines in causal evaluations while aligning with human-interpretable concepts.
A central goal of mechanistic interpretability has been to identify the right units of analysis in large language models (LLMs) that causally explain their outputs. While early work focused on individual neurons, evidence that neurons often encode multiple concepts has motivated a shift toward analyzing directions in activation space. A key question is how to find directions that capture interpretable features in an unsupervised manner. Current methods rely on dictionary learning with sparse autoencoders (SAEs), commonly trained over residual stream activations to learn directions from scratch. However, SAEs often struggle in causal evaluations and lack intrinsic interpretability, as their learning is not explicitly tied to the computations of the model. Here, we tackle these limitations by directly decomposing MLP activations with semi-nonnegative matrix factorization (SNMF), such that the learned features are (a) sparse linear combinations of co-activated neurons, and (b) mapped to their activating inputs, making them directly interpretable. Experiments on Llama 3.1, Gemma 2, and GPT-2 show that SNMF-derived features outperform SAEs and a strong supervised baseline (difference-in-means) on causal steering, while aligning with human-interpretable concepts. Further analysis reveals that specific neuron combinations are reused across semantically related features, exposing a hierarchical structure in the MLP's activation space. Together, these results position SNMF as a simple and effective tool for identifying interpretable features and dissecting concept representations in LLMs.
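To make the decomposition concrete, below is a minimal NumPy sketch of semi-NMF using the multiplicative updates of Ding, Li, and Jordan (2010). The matrix shapes, rank, and iteration count are illustrative assumptions, and the sketch omits any sparsity regularization; it is not the paper's exact training setup.

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Semi-NMF: X (d x n) ~= F @ G.T with G >= 0 (Ding et al., 2010 updates).

    In this setting X would hold MLP activations (neurons x tokens); each
    column of F is a candidate feature (a linear combination of neurons) and
    G stores the nonnegative per-token coefficients.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    G = np.abs(rng.standard_normal((n, k)))  # nonnegative coefficient init
    pos = lambda A: (np.abs(A) + A) / 2      # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2      # elementwise negative part
    for _ in range(n_iter):
        # F-step: unconstrained least squares, F = X G (G^T G)^{-1}
        F = X @ G @ np.linalg.pinv(G.T @ G)
        # G-step: multiplicative update that keeps G nonnegative
        XtF, FtF = X.T @ F, F.T @ F
        num = pos(XtF) + G @ neg(FtF)
        den = neg(XtF) + G @ pos(FtF) + eps
        G *= np.sqrt(num / den)
    return F, G

# Toy usage: decompose fake "MLP activations" into 8 features.
acts = np.random.randn(3072, 1000)  # (hidden neurons x tokens), mixed signs
F, G = semi_nmf(acts, k=8)
recon_err = np.linalg.norm(acts - F @ G.T) / np.linalg.norm(acts)
```

Because only the coefficients are constrained to be nonnegative, the activations themselves may keep their mixed signs, which is what distinguishes semi-NMF from standard NMF here.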
Community
Excited to share our paper on decomposing MLP activations into interpretable features. We introduce a simple, unsupervised method based on semi-nonnegative matrix factorization (SNMF) that extracts sparse, compositional features from MLP layers. SNMF is easy to train on both small and large datasets. Because the decomposition is linear, SNMF can be applied recursively to reveal a natural hierarchy in the MLP neurons: semantically related features reuse a shared set of neurons that causally represents their overarching concept. Our features also outperform SAEs and strong supervised baselines on causal steering, while aligning closely with human-interpretable structure.
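As a hedged illustration of the recursive idea (the paper's exact recursion scheme may differ), one could re-factorize the learned feature matrix itself at a smaller rank to obtain coarser parent features, reusing the `semi_nmf` helper sketched above:

```python
# Hypothetical sketch: re-factorize the learned feature matrix F (neurons x 8)
# at a smaller rank to group child features under coarser "parent" neuron
# combinations. The recursion depth and ranks are illustrative assumptions.
F_parent, mixing = semi_nmf(F, k=3)  # F (3072 x 8) ~= F_parent (3072 x 3) @ mixing.T
# Each row of `mixing` (one per child feature) shows, nonnegatively, how that
# child draws on the parent features, exposing a hierarchy over shared neurons.
```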
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- SAEs Are Good for Steering -- If You Select the Right Features (2025)
- Beyond Input Activations: Identifying Influential Latents by Gradient Sparse Autoencoders (2025)
- Train Sparse Autoencoders Efficiently by Utilizing Features Correlation (2025)
- Line of Sight: On Linear Representations in VLLMs (2025)
- Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models (2025)
- Towards Interpretability Without Sacrifice: Faithful Dense Layer Decomposition with Mixture of Decoders (2025)
- Scaling sparse feature circuit finding for in-context learning (2025)