arxiv:2506.10920

Decomposing MLP Activations into Interpretable Features via Semi-Nonnegative Matrix Factorization

Published on Jun 12 · Submitted by ordavids1 on Jun 13

AI-generated summary

SNMF is used to identify interpretable features in LLMs by directly decomposing MLP activations, outperforming SAEs and supervised methods in causal evaluations and aligning with human-interpretable concepts.

Abstract

A central goal for mechanistic interpretability has been to identify the right units of analysis in large language models (LLMs) that causally explain their outputs. While early work focused on individual neurons, evidence that neurons often encode multiple concepts has motivated a shift toward analyzing directions in activation space. A key question is how to find directions that capture interpretable features in an unsupervised manner. Current methods rely on dictionary learning with sparse autoencoders (SAEs), commonly trained over residual stream activations to learn directions from scratch. However, SAEs often struggle in causal evaluations and lack intrinsic interpretability, as their learning is not explicitly tied to the computations of the model. Here, we tackle these limitations by directly decomposing MLP activations with semi-nonnegative matrix factorization (SNMF), such that the learned features are (a) sparse linear combinations of co-activated neurons, and (b) mapped to their activating inputs, making them directly interpretable. Experiments on Llama 3.1, Gemma 2, and GPT-2 show that SNMF-derived features outperform SAEs and a strong supervised baseline (difference-in-means) on causal steering, while aligning with human-interpretable concepts. Further analysis reveals that specific neuron combinations are reused across semantically related features, exposing a hierarchical structure in the MLP's activation space. Together, these results position SNMF as a simple and effective tool for identifying interpretable features and dissecting concept representations in LLMs.
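
The abstract's key operation, semi-nonnegative matrix factorization, approximates a mixed-sign activation matrix as the product of an unconstrained factor and a nonnegative factor. The snippet below is a minimal, self-contained sketch using the standard multiplicative updates of Ding, Li & Jordan (2010), not the authors' released implementation; the `seminmf` function, the `acts` stand-in, and all shapes are illustrative assumptions, and the paper's specific choices for enforcing sparsity and for mapping factors to neurons and inputs are not reproduced here.

```python
import numpy as np

def seminmf(X, k, n_iter=100, eps=1e-9, seed=0):
    """Semi-NMF (Ding, Li & Jordan, 2010): X ~= F @ G.T with G >= 0.

    X : (d, n) mixed-sign data matrix (e.g., MLP activations).
    Returns F (d, k), unconstrained, and G (n, k), nonnegative.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    G = np.abs(rng.standard_normal((n, k)))      # nonnegative init
    pos = lambda A: (np.abs(A) + A) / 2          # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2          # elementwise negative part
    for _ in range(n_iter):
        # F has a closed-form least-squares solution given G
        F = X @ G @ np.linalg.pinv(G.T @ G)
        # Multiplicative update keeps G nonnegative while reducing the loss
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) / (neg(XtF) + G @ pos(FtF) + eps))
    return F, G

# Illustrative usage: 'acts' stands in for a (n_neurons, n_tokens) matrix of
# MLP activations collected with forward hooks; random data keeps this runnable.
acts = np.random.default_rng(1).standard_normal((3072, 2000))
F, G = seminmf(acts, k=32)
print(F.shape, G.shape)   # (3072, 32) (2000, 32)
```

With the activation matrix oriented as neurons × tokens as above, each of the k components pairs a direction over neurons (a column of F) with nonnegative per-token coefficients (a column of G).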

Community

Paper author · Paper submitter

Excited to share our paper on decomposing MLP activations into interpretable features. We introduce a simple, unsupervised method based on semi-nonnegative matrix factorization (SNMF) that extracts sparse, compositional features from MLP layers. SNMF is easy to train on both small and large datasets. Because the decomposition is linear, SNMF can be applied recursively to reveal a natural hierarchy over the MLP neurons: semantically related features reuse a shared set of neurons that causally represent their overarching concept. Furthermore, our features outperform SAEs and strong supervised baselines on causal steering, while aligning closely with human-interpretable structure.
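
A hypothetical sketch of the recursive idea, reusing the `seminmf` helper and `acts` stand-in from the sketch above (an assumption-laden illustration, not the paper's exact procedure): factoring the learned feature matrix a second time at a smaller rank groups first-level features into coarser parents that share neuron directions.

```python
# Reuses seminmf() and acts from the previous sketch.
# Level 1: decompose MLP activations into 32 features over neurons.
F1, G1 = seminmf(acts, k=32)
# Level 2: decompose the level-1 feature matrix itself at a smaller rank.
F2, G2 = seminmf(F1, k=8)
# Each row of the nonnegative G2 (shape 32 x 8) shows how strongly a level-1
# feature loads on each of the 8 parent components, which live in the same
# neuron space as F1 (columns of F2); features sharing a parent are candidates
# for an overarching concept.
```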

