---
title: "Hidden Dynamics of Massive Activations in Transformer Training"
tags:
- transformers
- massive-activations
- training-dynamics
- pythia
- neural-network-analysis
- interpretability
license: mit
task_categories:
- other
language:
- en
size_categories:
- 10K<n<100K
---

# Hidden Dynamics of Massive Activations in Transformer Training

## Dataset Description

This dataset contains comprehensive analysis data for the paper "Hidden Dynamics of Massive Activations in Transformer Training". It provides detailed measurements and mathematical characterizations of massive activation emergence patterns across the Pythia model family during training.

**Massive activations** are scalar values in transformer hidden states that achieve values orders of magnitude larger than typical activations and have been shown to be critical for model functionality. This dataset presents the first systematic study of how these phenomena emerge and evolve throughout transformer training.

## Abstract

Massive activations are scalar values in transformer hidden states that achieve values orders of magnitude larger than typical activations and have been shown to be critical for model functionality. While prior work has characterized these phenomena in fully trained models, the temporal dynamics of their emergence during training remain poorly understood. We present the first comprehensive analysis of massive activation development throughout transformer training, using the Pythia model family as our testbed. Through systematic analysis of various model sizes across multiple training checkpoints, we demonstrate that massive activation emergence follows predictable mathematical patterns that can be accurately modeled using an exponentially-modulated logarithmic function with five key parameters. We develop a machine learning framework to predict these mathematical parameters from architectural specifications alone, achieving high accuracy for steady-state behavior and moderate accuracy for emergence timing and magnitude. These findings enable architects to predict and potentially control key aspects of massive activation emergence through design choices, with significant implications for model stability, training cycle length, interpretability, and optimization. Our findings demonstrate that the emergence of massive activations is governed by model design and can be anticipated, and potentially controlled, before training begins.

## Dataset Structure

### Root Files

- **`fitted_param_dataset_reparam.csv`**: Consolidated dataset containing fitted mathematical parameters for all models and layers, along with architectural specifications.

### Model-Specific Directories

Each Pythia model has its own directory (`pythia_14m`, `pythia_70m`, `pythia_160m`, `pythia_410m`, `pythia_1b`, `pythia_1.4b`, `pythia_2.8b`, `pythia_6.9b`, `pythia_12b`) containing:

#### `/stats/`
Raw massive activation statistics for each training checkpoint. Files are named `exp2_{model}_step{N}` (e.g., `exp2_pythia_160m_step1000`) and contain:
- **Structure**: List with dimensions `B × Q × L`
  - `B`: Batch ID (10 random sequences from the dataset)
  - `Q`: Quantity type (4 values: top1, top2, top3, median activation)
  - `L`: Layer ID

#### `/params/`
Mathematical model fitting results:
- **`layer_fit_params.json`**: Complete fitting results for all quantities (ratio, top1, median)
- **`layer_fit_params_{quantity}.json`**: Quantity-specific fitting results
- **Structure**: List where each element corresponds to a layer, containing dictionaries with keys:
  - `'original'`, `'reparam'`, `'step2'`: Different mathematical hypotheses
  - Each hypothesis contains: `'param_names'`, `'popt'`, `'pcov'`, `'r2'`, `'aic'`, `'residuals'`

#### `/series/`
Time series plots showing extracted quantities across training steps per layer:
- `magnitudes.png`: Overall magnitude evolution
- `median.png`: Median activation evolution
- `ratio.png`: Top1/median ratio evolution
- `top1.png`: Top1 activation evolution

#### `/per_layer_evolution/`
Visualizations of massive activation patterns at each training step, showing layer-by-layer evolution.

#### `/example_fits/`
Selected mathematical model fits for representative layers (shallow, middle, deep) organized by quantity type (`median/`, `ratio/`, `top1/`).

#### `/metrics/`
Model fitting quality metrics:
- `r2_{quantity}.png`: R² values across layers
- `aic_{quantity}.png`: AIC values across layers

## Mathematical Framework

The dataset captures massive activation evolution using an **exponentially-modulated logarithmic function** (a short evaluation sketch follows the parameter list below):

```
f(x) = exp(-β × x) × (A₁ × log(x + τ₀) + A₂) + K
```

### Parameters in `fitted_param_dataset_reparam.csv`:

- **`param_A`** (A₁): Log amplitude coefficient
- **`param_λ`** (A₂): Pure exponential amplitude
- **`param_γ`** (β): Decay rate
- **`param_t0`** (τ₀): Horizontal shift parameter
- **`param_K`**: Asymptotic baseline value
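
To connect the formula to the CSV columns, here is a minimal sketch that reconstructs the fitted curve for one layer. It is an illustration rather than the original fitting code; the column names follow the list above, while the chosen model, layer, and step grid are arbitrary.

```python
import numpy as np
import pandas as pd

def massive_activation_curve(x, A1, A2, beta, tau0, K):
    """f(x) = exp(-beta * x) * (A1 * log(x + tau0) + A2) + K"""
    return np.exp(-beta * x) * (A1 * np.log(x + tau0) + A2) + K

df = pd.read_csv("fitted_param_dataset_reparam.csv")

# Pick one fitted layer (here: the last listed layer of pythia_160m).
row = df[df["model"] == "pythia_160m"].iloc[-1]

# Evaluate the fitted curve on a log-spaced grid of training steps.
steps = np.logspace(0, np.log10(143_000), 200)
curve = massive_activation_curve(
    steps,
    row["param_A"],   # A₁: log amplitude coefficient
    row["param_λ"],   # A₂: pure exponential amplitude
    row["param_γ"],   # β: decay rate
    row["param_t0"],  # τ₀: horizontal shift
    row["param_K"],   # K: asymptotic baseline
)
```

The same function can also be refitted to the raw series in `/stats/` with `scipy.optimize.curve_fit` if you want to reproduce or extend the fits.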

### Architectural Features:
- `model`: Model name (e.g., pythia_160m)
- `layer_index`: Absolute layer position (0-indexed)
- `layer_index_norm`: Normalized layer depth (0-1)
- `num_hidden_layers`: Total number of layers
- `hidden_size`: Hidden dimension size
- `intermediate_size`: Feed-forward intermediate size
- `num_attention_heads`: Number of attention heads
- Additional architectural parameters

## Models Covered

The dataset includes analysis for the complete Pythia model family:

| Model | Parameters | Layers | Hidden Size | Intermediate Size |
|-------|------------|--------|-------------|-------------------|
| pythia-14m | 14M | 6 | 128 | 512 |
| pythia-70m | 70M | 6 | 512 | 2048 |
| pythia-160m | 160M | 12 | 768 | 3072 |
| pythia-410m | 410M | 24 | 1024 | 4096 |
| pythia-1b | 1B | 16 | 2048 | 8192 |
| pythia-1.4b | 1.4B | 24 | 2048 | 8192 |
| pythia-2.8b | 2.8B | 32 | 2560 | 10240 |
| pythia-6.9b | 6.9B | 32 | 4096 | 16384 |
| pythia-12b | 12B | 36 | 5120 | 20480 |

## Training Checkpoints

Analysis covers the complete Pythia training sequence with 39 checkpoints from initialization to convergence (a loading sketch for these checkpoints follows the list):
- Early steps: 0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512
- Regular intervals: 1K, 2K, 4K, 6K, 8K, 10K, 12K, 14K, 16K, 20K, 24K, 28K, 32K, 36K, 40K
- Late training: 50K, 60K, 70K, 80K, 90K, 100K, 110K, 120K, 130K, 140K, 143K (final)
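
To load the full series for one model, the step numbers above can be enumerated and combined with the `exp2_{model}_step{N}` naming described under `/stats/`. This is a hedged sketch: it assumes the `K` entries above denote thousands of steps and simply skips any checkpoint file that is not present.

```python
import ast
from pathlib import Path

import numpy as np

# Checkpoint steps listed above (early, regular, and late training).
STEPS = [0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512,
         1000, 2000, 4000, 6000, 8000, 10000, 12000, 14000, 16000,
         20000, 24000, 28000, 32000, 36000, 40000,
         50000, 60000, 70000, 80000, 90000, 100000,
         110000, 120000, 130000, 140000, 143000]

def load_stats(model: str, step: int):
    """Load one checkpoint's B x Q x L statistics, or None if the file is absent."""
    path = Path(model) / "stats" / f"exp2_{model}_step{step}"
    if not path.exists():
        return None
    return np.array(ast.literal_eval(path.read_text()))

# Map step -> array for every checkpoint present for pythia_160m.
series = {step: load_stats("pythia_160m", step) for step in STEPS}
```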

## Usage Examples

### Loading the Consolidated Parameter Dataset
```python
import matplotlib.pyplot as plt
import pandas as pd

# Load fitted parameters with architectural features
df = pd.read_csv("fitted_param_dataset_reparam.csv")

# Filter by model size
pythia_1b_data = df[df['model'] == 'pythia_1b']

# Analyze parameter trends by layer depth
plt.scatter(df['layer_index_norm'], df['param_A'])
plt.xlabel('Normalized Layer Depth')
plt.ylabel('Parameter A (Log Amplitude)')
plt.show()
```

### Loading Raw Statistics
```python
import ast

import numpy as np

# Load raw activation statistics for a specific checkpoint
with open("pythia_160m/stats/exp2_pythia_160m_step1000", 'r') as f:
    stats = np.array(ast.literal_eval(f.read()))  # B x Q x L array

# Extract top1 activations (Q=0) for all layers
top1_activations = stats[:, 0, :]  # Shape: (batch_size, num_layers)
```

### Loading Fitted Parameters
```python
import json

# Load complete fitting results for a model
with open("pythia_160m/params/layer_fit_params.json", 'r') as f:
    fit_results = json.load(f)

# Access reparam model results for layer 5
layer_5_reparam = fit_results[5]['reparam']
fitted_params = layer_5_reparam['popt']
r2_score = layer_5_reparam['r2']
```

## Applications

This dataset enables research in:

1. **Predictive Modeling**: Train ML models to predict massive activation parameters from architectural specifications (see the sketch after this list)
2. **Training Dynamics**: Understand how model design choices affect activation emergence patterns
3. **Model Interpretability**: Analyze the functional role of massive activations across different architectures
4. **Optimization**: Develop training strategies that account for massive activation dynamics
5. **Architecture Design**: Make informed decisions about model design based on predicted activation patterns
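
As a starting point for the predictive-modeling application, the following sketch fits a simple regressor that maps the architectural columns of `fitted_param_dataset_reparam.csv` to one fitted parameter. It is a baseline illustration using scikit-learn, not the framework from the paper; the feature and target choices are illustrative.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("fitted_param_dataset_reparam.csv")

# Illustrative feature/target selection; the paper's framework may differ.
features = ["layer_index_norm", "num_hidden_layers", "hidden_size",
            "intermediate_size", "num_attention_heads"]
target = "param_K"  # asymptotic baseline value

X, y = df[features], df[target]

# Cross-validated baseline for predicting a fitted parameter from architecture.
regressor = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(regressor, X, y, cv=5, scoring="r2")
print(f"cross-validated R² for {target}: {scores.mean():.3f} ± {scores.std():.3f}")
```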

## Citation

If you use this dataset, please cite:

```bibtex
@article{massive_activations_dynamics,
  title={Hidden Dynamics of Massive Activations in Transformer Training},
  author={[Your Name]},
  journal={[Journal/Conference]},
  year={2024}
}
```

## License

This dataset is released under the MIT License.

## Data Quality and Validation

### Statistical Coverage
- **39 training checkpoints** per model (from initialization to convergence)
- **10 random sequences** per checkpoint for statistical robustness
- **4 activation quantities** tracked: top1, top2, top3, median
- **Complete layer coverage** for all 9 Pythia model sizes

### Model Fitting Quality
- **R² scores** typically > 0.95 for well-behaved layers
- **Multiple mathematical hypotheses** tested per layer:
  - `original`: Standard exponentially-modulated logarithmic model
  - `reparam`: Reparameterized version for numerical stability
  - `original_regularized` / `reparam_regularized`: Regularized variants
  - `step2`: Alternative parameterization
- **AIC values** provided for model selection (see the sketch after this list)
- **Residual analysis** included for fit quality assessment
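
Because each hypothesis stores an `aic` value, per-layer model selection can be done directly from `layer_fit_params.json`. The sketch below picks the lowest-AIC hypothesis for every layer; it assumes the key layout described under `/params/` and in the usage examples, and skips hypothesis keys that are absent for a given layer.

```python
import json

# Hypothesis keys listed above; not every key is guaranteed for every layer.
HYPOTHESES = ["original", "reparam", "original_regularized",
              "reparam_regularized", "step2"]

with open("pythia_160m/params/layer_fit_params.json", "r") as f:
    fit_results = json.load(f)

# For each layer, keep the hypothesis with the lowest AIC.
for layer_idx, layer_fits in enumerate(fit_results):
    aics = {h: layer_fits[h]["aic"] for h in HYPOTHESES if h in layer_fits}
    best = min(aics, key=aics.get)
    print(f"layer {layer_idx}: best hypothesis = {best} (AIC = {aics[best]:.1f})")
```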

### Data Integrity
- All raw statistics files contain validated activation measurements
- Parameter fits include covariance matrices for uncertainty quantification
- Cross-validation performed across different mathematical formulations
- Outlier detection and handling documented in fitting procedures

## Technical Details

### Computational Requirements
- **Storage**: < 2 GB total dataset size
- **Memory**: Minimal requirements for loading individual files
- **Processing**: Standard scientific Python stack (pandas, numpy, matplotlib)

### File Formats
- **CSV**: Tabular data with standard pandas compatibility
- **JSON**: Structured parameter data with nested dictionaries
- **PNG**: High-resolution plots (300 DPI) for visualization
- **Raw stats**: Python-evaluable list format for direct loading

### Reproducibility
- All analysis code available in the accompanying repository
- Deterministic random seeds used throughout data collection
- Version-controlled parameter extraction and fitting procedures
- Complete provenance tracking from raw model outputs to final parameters

## Related Work

This dataset complements existing research on:
- Transformer interpretability and mechanistic understanding
- Training dynamics and loss landscape analysis
- Activation pattern analysis in large language models
- Mathematical modeling of neural network behavior

## Acknowledgments

We thank the EleutherAI team for providing the Pythia model family and training checkpoints that made this analysis possible.

## Contact

For questions about the dataset or research, please contact [email protected].