---
base_model: Qwen/Qwen3-8B
library_name: peft
---

# LoRA Adapter for SAE Introspection

This repository contains a LoRA (Low-Rank Adaptation) adapter for `Qwen/Qwen3-8B`, trained for SAE (Sparse Autoencoder) feature-introspection tasks: explaining what individual SAE features represent.

## Base Model
- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: SAE Feature Introspection

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B",
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # requires `accelerate`
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "thejaminator/qwen-hook-layer-9-step-2000")
```
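
The loaded model can then be queried like any causal LM. A minimal generation sketch follows; the prompt is a placeholder, since the exact prompt format used during training is not documented here.

```python
# Placeholder prompt; the training-time prompt format is an assumption here.
prompt = "Explain the concept represented by this SAE feature."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```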

## Training Details
This adapter was trained with a lightweight SAE introspection training script that teaches the model to understand and explain SAE features through activation steering (injecting a feature's direction into the model's hidden activations).
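
The training script itself is not reproduced here. For readers unfamiliar with the technique, the sketch below shows what activation steering with an SAE feature typically looks like: a scaled feature direction is added to one layer's hidden states via a PyTorch forward hook. Everything in it is illustrative: the random `feature_direction`, the `steering_strength`, and the choice of layer 9 (suggested only by the adapter's name) are assumptions, not the actual training configuration.

```python
import torch

# Hypothetical feature direction; in practice this would be a decoder row
# from a trained SAE, not random noise.
hidden_size = base_model.config.hidden_size
feature_direction = torch.randn(hidden_size)
feature_direction = feature_direction / feature_direction.norm()
steering_strength = 8.0  # assumed scale; tuned per feature in practice

def steering_hook(module, args, output):
    # Decoder layers may return a tensor or a tuple whose first element
    # is the hidden states; add the scaled direction at every position.
    hidden = output[0] if isinstance(output, tuple) else output
    steered = hidden + steering_strength * feature_direction.to(hidden.device, hidden.dtype)
    return (steered,) + output[1:] if isinstance(output, tuple) else steered

# Layer 9 is an assumption based on the adapter name ("hook-layer-9").
handle = base_model.model.layers[9].register_forward_hook(steering_hook)
try:
    pass  # ...generate while the steering hook is active...
finally:
    handle.remove()  # always detach the hook afterwards
```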