LLM-jp-3-1.8B SAE

This repository provides a TopK Sparse Autoencoder (SAE) trained on LLM-jp-3-1.8B, developed by the Research and Development Center for Large Language Models at the National Institute of Informatics, Japan.
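
The repository name presumably encodes the configuration: an SAE trained on layer-12 activations with TopK sparsity k = 32 and a 16x expansion over the base model's 2048-dimensional hidden states, giving 32,768 latents (consistent with the 134M parameter count listed below, since 2 × 2048 × 32,768 ≈ 134M). Below is a minimal sketch of a TopK SAE forward pass in PyTorch under those assumed dimensions; the released checkpoint's module and parameter names may differ.

```python
# Minimal TopK SAE sketch (PyTorch). Dimensions are inferred from the
# repository name and LLM-jp-3-1.8B's hidden size (2048); the actual
# checkpoint's module/parameter names may differ.
import torch
import torch.nn as nn


class TopKSAE(nn.Module):
    def __init__(self, d_model: int = 2048, expansion: int = 16, k: int = 32):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_model * expansion)
        self.decoder = nn.Linear(d_model * expansion, d_model)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # TopK sparsity: keep only the k largest pre-activations per token
        # and zero out everything else before decoding.
        pre = self.encoder(x)
        topk = torch.topk(pre, self.k, dim=-1)
        latents = torch.zeros_like(pre).scatter_(-1, topk.indices, topk.values)
        return self.decoder(latents), latents
```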

Usage

Python version: 3.10.12

See the README.md in the GitHub repository for usage instructions.
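
For orientation, here is a hedged sketch of fetching the checkpoint and collecting layer-12 hidden states from the base model to feed an SAE. The safetensors filename and the meaning of "layer 12" (taken here as the hidden state after block 12) are assumptions; defer to the GitHub README for the canonical loading code.

```python
# Hedged loading sketch: the filename "model.safetensors" and the mapping of
# "l12" to hidden_states[12] are assumptions, not confirmed by this card.
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "llm-jp/llm-jp-3-1.8b-sae-l12-k32-16x-c988240"
state_dict = load_file(hf_hub_download(repo_id, "model.safetensors"))

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-1.8b")
model = AutoModelForCausalLM.from_pretrained(
    "llm-jp/llm-jp-3-1.8b", torch_dtype=torch.bfloat16
)

inputs = tokenizer("自然言語処理の研究", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)
# hidden_states[0] is the embedding output, so index 12 is the hidden state
# after block 12 (assuming that is what "l12" in the repository name refers to).
acts = outputs.hidden_states[12]
```

Once the checkpoint's weight keys are known, `state_dict` can be mapped onto an SAE module like the sketch above and applied to `acts`.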

Model details

Model size: 134M params
Tensor type: BF16
Format: Safetensors
