diff --git "a/data/training_a_sparse_autoencoder.ipynb" "b/data/training_a_sparse_autoencoder.ipynb" deleted file mode 100644--- "a/data/training_a_sparse_autoencoder.ipynb" +++ /dev/null @@ -1,1087 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "jp-MarkdownHeadingCollapsed": true - }, - "source": [ - "## Prepare the dataset" - ] - }, - { - "cell_type": "code", - "execution_count": 16, - "metadata": {}, - "outputs": [ - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "142a91ce5d274b87a29c35d9871d1920", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "Resolving data files: 0%| | 0/79 [00:00', 'Once', ' upon', ' a', ' time', ',', ' there', ' was', ' a', ' little', ' girl', ' named', ' Lily', '.', ' She', ' lived', ' in', ' a', ' big', ',', ' happy', ' little', ' town', '.', ' On', ' her', ' big', ' adventure', ',']\n", - "Tokenized answer: [' Lily']\n" - ] - }, - { - "data": { - "text/html": [ - "
Performance on answer token:\n",
-       "Rank: 1        Logit: 26.94 Prob: 43.80% Token: | Lily|\n",
-       "
\n" - ], - "text/plain": [ - "Performance on answer token:\n", - "\u001b[1mRank: \u001b[0m\u001b[1;36m1\u001b[0m\u001b[1m Logit: \u001b[0m\u001b[1;36m26.94\u001b[0m\u001b[1m Prob: \u001b[0m\u001b[1;36m43.80\u001b[0m\u001b[1m% Token: | Lily|\u001b[0m\n" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Top 0th token. Logit: 27.16 Prob: 54.49% Token: | she|\n", - "Top 1th token. Logit: 26.94 Prob: 43.80% Token: | Lily|\n", - "Top 2th token. Logit: 21.41 Prob: 0.17% Token: | the|\n", - "Top 3th token. Logit: 21.38 Prob: 0.17% Token: | which|\n", - "Top 4th token. Logit: 21.38 Prob: 0.17% Token: | her|\n", - "Top 5th token. Logit: 21.31 Prob: 0.16% Token: | a|\n", - "Top 6th token. Logit: 20.73 Prob: 0.09% Token: | in|\n", - "Top 7th token. Logit: 20.62 Prob: 0.08% Token: | one|\n", - "Top 8th token. Logit: 19.86 Prob: 0.04% Token: | when|\n", - "Top 9th token. Logit: 19.84 Prob: 0.04% Token: | surrounded|\n" - ] - }, - { - "data": { - "text/html": [ - "
Ranks of the answer tokens: [(' Lily', 1)]\n",
-       "
\n" - ], - "text/plain": [ - "\u001b[1mRanks of the answer tokens:\u001b[0m \u001b[1m[\u001b[0m\u001b[1m(\u001b[0m\u001b[32m' Lily'\u001b[0m, \u001b[1;36m1\u001b[0m\u001b[1m)\u001b[0m\u001b[1m]\u001b[0m\n" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], - "source": [ - "from transformer_lens.utils import test_prompt\n", - "\n", - "# Test the model with a prompt\n", - "test_prompt(\n", - " \"Once upon a time, there was a little girl named Lily. She lived in a big, happy little town. On her big adventure,\",\n", - " \" Lily\",\n", - " model,\n", - " prepend_space_to_answer=False,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "jGzOvReDOVHv" - }, - "source": [ - "In the output above, we see that the model assigns ~ 70% probability to \"she\" being the next token, and a 13% chance to \" Lily\" being the next token. Other names like Lucy or Anna are not highly ranked." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "QH8YOZOzOVHv" - }, - "source": [ - "### Exploring Model Capabilities with Log Probs" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "50mqTBihOVHw" - }, - "source": [ - "Looking at token ranking for a single prompt is interesting, but a much higher through way to understand models is to look at token log probs for all tokens in text. We can use the `circuits_vis` package to get a nice visualization where we can see tokenization, and hover to get the top5 tokens by log probability. Darker tokens are tokens where the model assigned a higher probability to the actual next token." - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": { - "id": "Tic0RCUpOVHw" - }, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - " " - ], - "text/plain": [ - "" - ] - }, - "execution_count": 10, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "import circuitsvis as cv # optional dep, install with pip install circuitsvis\n", - "\n", - "# Let's make a longer prompt and see the log probabilities of the tokens\n", - "example_prompt = \"\"\"Hi, how are you doing this? I'm really enjoying your posts\"\"\"\n", - "logits, cache = model.run_with_cache(example_prompt)\n", - "cv.logits.token_log_probs(\n", - " model.to_tokens(example_prompt),\n", - " model(example_prompt)[0].log_softmax(dim=-1),\n", - " model.to_string,\n", - ")\n", - "# hover on the output to see the result." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "lhGIl3YbOVHw" - }, - "source": [ - "Let's combine `model.generate` and the token log probs visualization to see the log probs on text generated by the model. Note that we can play with the temperature and this should sample less likely trajectories according to the model. I've increased the maximum number of tokens in order to get a full story.\n", - "\n", - "Some things to explore:\n", - "- Which tokens does the model assign high probability to? Can you see how the model should know which word comes next?\n", - "- What happens if you increase / decrease the temperature?\n", - "- Do the rankings of tokens seem sensible to you? What about where the model doesn't assign a high probability to the token which came next?" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": { - "id": "Nikp2ASlOVHw" - }, - "outputs": [ - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "824be3d7fc174cdb8156753fc5b8667b", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - " 0%| | 0/200 [00:00\n", - " " - ], - "text/plain": [ - "" - ] - }, - "execution_count": 11, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "example_prompt = model.generate(\n", - " \"Once upon a time\",\n", - " stop_at_eos=False, # avoids a bug on MPS\n", - " temperature=1,\n", - " verbose=True,\n", - " max_new_tokens=200,\n", - ")\n", - "logits, cache = model.run_with_cache(example_prompt)\n", - "cv.logits.token_log_probs(\n", - " model.to_tokens(example_prompt),\n", - " model(example_prompt)[0].log_softmax(dim=-1),\n", - " model.to_string,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Lets do the same exploration for telugu text and see how model understands it." - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": {}, - "outputs": [ - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "f8e2e3c89df84ce0b8a3025d179eb778", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - " 0%| | 0/200 [00:00\n", - " " - ], - "text/plain": [ - "" - ] - }, - "execution_count": 12, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "example_prompt = model.generate(\n", - " \" Translate this telugu text into English: \\\n", - " పిల్లి లేదా మార్జాలం (ఆంగ్లం: Cat) కార్నివోరా క్రమానికి చెందిన చిన్న క్షీరదము. \\\n", - " దీనిని పెంపుడు పిల్లి అని కూడా అంటారు. వీనిని మానవులు పురాతన కాలం నుండి సుమారు 9,500 సంవత్సరాలుగా పెంచుకుంటున్నారు. 
\\n", - "     \",\n", - "    stop_at_eos=False,  # avoids a bug on MPS\n", - "    temperature=0.7,\n", - "    verbose=True,\n", - "    max_new_tokens=200,\n", - ")\n", - "logits, cache = model.run_with_cache(example_prompt)\n", - "cv.logits.token_log_probs(\n", - "    model.to_tokens(example_prompt),\n", - "    model(example_prompt)[0].log_softmax(dim=-1),\n", - "    model.to_string,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The model's tokenization of Telugu is quite poor because of how Telugu words are split into tokens. Unlike English, Telugu vowel signs end up as separate tokens inside each word: 'Cat' can be a single token in English, but the Telugu word 'పిల్లి' is split into the tokens 'ప', 'ి', 'ల్ల', 'ి', because each written letter combines a consonant with a vowel sign. This is why the model is harder to interpret on Telugu text.
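As a quick sanity check on this claim, you can ask the model's own tokenizer how it splits a Telugu word. A minimal sketch, assuming `model` is the `HookedTransformer` loaded earlier in this notebook:

```python
# Minimal check of how the model's own tokenizer splits a Telugu word
# (assumes `model` is the HookedTransformer loaded earlier in the notebook).
word = "పిల్లి"  # 'cat'
print(model.to_str_tokens(word, prepend_bos=False))
# Expect several sub-character tokens rather than one token per word.
```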
" - ] - }, - { - "cell_type": "code", - "execution_count": 13, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "['ప', 'ి', 'ల్ల', 'ి']\n" - ] - } - ], - "source": [ - "# Custom Code to get the Telugu tokens from a given text\n", - "import re\n", - "\n", - "# List of Telugu vowels\n", - "telugu_vowels = ['అ', 'ఆ', 'ఇ', 'ఈ', 'ఉ', 'ఊ', 'ఋ', 'ఎ', 'ఏ', 'ఐ', 'ఒ', 'ఓ', 'ఔ', 'ా', 'ి', 'ీ', 'ు', 'ూ', 'ృ', 'ె', 'ే', 'ై', 'ొ', 'ో', 'ౌ']\n", - "\n", - "def custom_tokenizer(text):\n", - " # Create a regex pattern to match Telugu vowels\n", - " pattern = '|'.join(map(re.escape, telugu_vowels))\n", - " # Split the text based on the pattern\n", - " tokens = re.split(f'({pattern})', text)\n", - " # Filter out empty tokens\n", - " tokens = [token for token in tokens if token]\n", - " return tokens\n", - "\n", - "# Example usage\n", - "text = \"పిల్లి\"\n", - "tokens = custom_tokenizer(text)\n", - "print(tokens) # Output: ['ప', 'ి', 'ల', '్', 'ల', 'ి']" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "er3H1TDoOVHw" - }, - "source": [ - "# Training an SAE\n", - "\n", - "Now we're ready to train out SAE. We'll make a runner config, instantiate the runner and the rest is taken care of for us!\n", - "\n", - "During training, you use weights and biases to check key metrics which indicate how well we are able to optimize the variables we care about.\n", - "\n", - "To get a better sense of which variables to look at, you can read my (Joseph's) post [here](https://www.lesswrong.com/posts/f9EgfLSurAiqRJySD/open-source-sparse-autoencoders-for-all-residual-stream) and especially look at my weights and biases report [here](https://links-cdn.wandb.ai/wandb-public-images/links/jbloom/uue9i416.html).\n", - "\n", - "A few tips:\n", - "- Feel free to reorganize your wandb dashboard to put L0, CE_Loss_score, explained variance and other key metrics in one section at the top.\n", - "- Make a [run comparer](https://docs.wandb.ai/guides/app/features/panels/run-comparer) when tuning hyperparameters.\n", - "- You can download the resulting sparse autoencoder / sparsity estimate from wandb and upload them to huggingface if you want to share your SAE with other.\n", - " - cfg.json (training config)\n", - " - sae_weight.safetensors (model weights)\n", - " - sparsity.safetensors (sparsity estimate)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "jCHtPycOOVHw" - }, - "source": [ - "## MLP Out\n", - "\n", - "I've tuned the hyperparameters below for a decent SAE which achieves 86% CE Loss recovered and an L0 of ~85, and runs in about 2 hours on an M3 Max. You can get an SAE that looks better faster if you only consider L0 and CE loss but it will likely have more dense features and more dead features. Here's a link to my output with two runs with two different L1's: https://wandb.ai/jbloom/sae_lens_tutorial ." 
- ] - }, - { - "cell_type": "code", - "execution_count": 14, - "metadata": { - "id": "oAsZCAdJOVHw", - "scrolled": true - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Run name: 36864-L1-5-LR-5e-05-Tokens-8.000e+06\n", - "n_tokens_per_buffer (millions): 1.048576\n", - "Lower bound: n_contexts_per_buffer (millions): 0.001024\n", - "Total training steps: 488\n", - "Total wandb updates: 16\n", - "n_tokens_per_feature_sampling_window (millions): 16777.216\n", - "n_tokens_per_dead_feature_window (millions): 16777.216\n", - "We will reset the sparsity calculation 0 times.\n", - "Number tokens in sparsity calculation window: 1.64e+07\n", - "MODEL LOADING:\n", - "Setting model device to cuda for d_devices\n", - "Will use cuda:0 to cuda:3\n", - "-------------\n" - ] - }, - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "f3268894f31c42c7a94829f0244f72af", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "Loading checkpoint shards: 0%| | 0/2 [00:00" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/html": [ - "W&B syncing is set to `offline` in this directory.
Run `wandb online` or set WANDB_MODE=online to enable cloud syncing." - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "Training SAE: 0%| | 0/8000000 [00:00" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/html": [ - "

Run history:

details/current_l1_coefficient    ▁▁▂▂▃▃▄▄▅▅▆▆▇▇██
details/current_learning_rate     ██▇▇▆▆▅▅▄▄▃▃▂▂▁▁
details/n_training_tokens         ▁▁▂▂▃▃▄▄▅▅▆▆▇▇██
losses/l1_loss                    ▁▁▂▂▂▄▅▆▇▇▇█████
losses/mse_loss                   █▄▃▂▂▂▁▁▁▁▁▁▁▁▁▁
losses/overall_loss               █▄▃▂▂▂▁▁▁▁▁▁▁▁▁▁
losses/raw_l1_loss                ▁▁▁▁▁▂▃▄▄▅▅▆▇▇▇█
metrics/explained_variance        ▁▂▃▄▅▆▇▇▇▇██████
metrics/explained_variance_std    █▄▃▃▄▄▃▃▂▂▂▂▁▁▁▁
metrics/l0                        ▁▁▂▂▃▄▆▆▇▇▇█████
sparsity/dead_features            ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
sparsity/mean_passes_since_fired  ▁▂▂▁▁▃▃▅▅▄▆█▄▅▃▃

Run summary:

details/current_l1_coefficient    0.048
details/current_learning_rate     5e-05
details/n_training_tokens         7864320
losses/l1_loss                    1083.33333
losses/mse_loss                   144
losses/overall_loss               196
losses/raw_l1_loss                52
metrics/explained_variance        0.91016
metrics/explained_variance_std    0.03345
metrics/l0                        11277.56641
sparsity/dead_features            0
sparsity/mean_passes_since_fired  0.19393

" - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/html": [ - "You can sync this run to the cloud by running:
wandb sync /data1/max/telugu_corpus/andhrajyothy_data/wandb/offline-run-20250112_055552-1jc5it4x" - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - }, - { - "data": { - "text/html": [ - "Find logs at: ./wandb/offline-run-20250112_055552-1jc5it4x/logs" - ], - "text/plain": [ - "" - ] - }, - "metadata": {}, - "output_type": "display_data" - } - ], - "source": [ - "# clear CUDA\n", - "torch.cuda.empty_cache()\n", - "\n", - "total_training_steps = 1_000_000 # probably we should do more\n", - "batch_size = 8\n", - "total_training_tokens = total_training_steps * batch_size\n", - "\n", - "lr_warm_up_steps = 0\n", - "lr_decay_steps = total_training_steps // 5 # 20% of training\n", - "l1_warm_up_steps = total_training_steps // 20 # 5% of training\n", - "\n", - "# FIX to load dataset from the disk update line number 198 to 207 in package sae_lens/training/activations_store.py with following code\n", - "\n", - "# self.dataset = (\n", - "# datasets.load_from_disk(\n", - "# dataset,\n", - "# )\n", - "# if isinstance(dataset, str)\n", - "# else dataset\n", - "# )\n", - "\n", - "cfg = LanguageModelSAERunnerConfig(\n", - " # Data Generating Function (Model + Training Distibuion)\n", - " model_name=\"google/gemma-2-2b-it\", # model (more options here: https://neelnanda-io.github.io/TransformerLens/generated/model_properties_table.html)\n", - " hook_name=\"blocks.8.hook_resid_post\", # A valid hook point (see more details here: https://neelnanda-io.github.io/TransformerLens/generated/demos/Main_Demo.html#Hook-Points)\n", - " hook_layer=8, # Only one layer in the model.\n", - " d_in=2304, #1024*2, # the width of output Note: different for each model \n", - " dataset_path='ai4bharat_tel_dataset_tokenized', #\"apollo-research/roneneldan-TinyStories-tokenizer-gpt2\", # this is a tokenized language dataset on Huggingface for the Tiny Stories corpus.\n", - " is_dataset_tokenized=True,\n", - " streaming=True, # we could pre-download the token dataset if it was small.\n", - " # SAE Parameters\n", - " mse_loss_normalization=None, # We won't normalize the mse loss,\n", - " expansion_factor=16, # the width of the SAE. Larger will result in better stats but slower training.\n", - " b_dec_init_method=\"zeros\", # The geometric median can be used to initialize the decoder weights.\n", - " apply_b_dec_to_input=False, # We won't apply the decoder weights to the input.\n", - " normalize_sae_decoder=False,\n", - " scale_sparsity_penalty_by_decoder_norm=True,\n", - " decoder_heuristic_init=True,\n", - " init_encoder_as_decoder_transpose=True,\n", - " normalize_activations=\"expected_average_only_in\",\n", - " # Training Parameters\n", - " lr=5e-5, # lower the better, we'll go fairly high to speed up the training.\n", - " adam_beta1=0.9, # adam params (default, but once upon a time we experimented with these.)\n", - " adam_beta2=0.999,\n", - " lr_scheduler_name=\"constant\", # constant learning rate with warmup. 
There could be better schedules out there.\n", - "    lr_warm_up_steps=lr_warm_up_steps,  # this can help avoid too many dead features initially.\n", - "    lr_decay_steps=lr_decay_steps,  # this will help us avoid overfitting.\n", - "    l1_coefficient=5,  # controls how sparse the feature activations are\n", - "    l1_warm_up_steps=l1_warm_up_steps,  # this can help avoid too many dead features initially.\n", - "    lp_norm=1.0,  # the L1 penalty (and not an Lp norm for p < 1)\n", - "    # train_batch_size_tokens=batch_size,\n", - "    context_size=1024,  # controls the length of the prompts we feed to the model. Larger is better but slower, so we'll use a fairly short one.\n", - "    # Activation Store Parameters\n", - "    n_batches_in_buffer=64,  # controls how many activations we store / shuffle.\n", - "    training_tokens=total_training_tokens,  # 8 million tokens here; we want enough to see good stats. Get a coffee, come back.\n", - "    store_batch_size_prompts=16,\n", - "    # Resampling protocol\n", - "    use_ghost_grads=False,  # we don't use ghost grads anymore.\n", - "    feature_sampling_window=1000,  # this controls our reporting of feature sparsity stats\n", - "    dead_feature_window=1000,  # would affect resampling or ghost grads if we were using them.\n", - "    dead_feature_threshold=1e-4,  # would affect resampling or ghost grads if we were using them.\n", - "    # WANDB\n", - "    log_to_wandb=True,  # always use wandb unless you are just testing code.\n", - "    wandb_project=\"sae_lens_tutorial\",\n", - "    wandb_log_frequency=30,\n", - "    eval_every_n_wandb_logs=20,\n", - "    # Misc\n", - "    device='cuda',\n", - "    seed=42,\n", - "    n_checkpoints=1,\n", - "    checkpoint_path=\"checkpoints\",\n", - "    dtype='torch.bfloat16',  # \"float32\"\n", - "    model_from_pretrained_kwargs={\"n_devices\": 4},  # number of GPUs to spread the model across\n", - "    act_store_device='cuda:3',\n", - "    compile_sae=False,  # performance enhancement (disabled here)\n", - "    train_batch_size_tokens=4096*4,\n", - ")\n", - "# look at the next cell to see some instructions for what to do while this is running.\n", - "sparse_autoencoder = SAETrainingRunner(cfg).run()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "khR_QkAJOVHw" - }, - "source": [ - "# TO DO: Understanding the Telugu dataset with our SAE\n", - "\n", - "I haven't had time yet to complete this section, but I'd love to see a PR where someone uses an SAE they trained in this tutorial to understand this model better. A rough sketch of one possible starting point follows below." - ] - } - ], - "metadata": { - "accelerator": "GPU", - "colab": { - "gpuType": "T4", - "provenance": [] - }, - "kernelspec": { - "display_name": "venv1", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.2" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -}
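For anyone picking up the TO DO above, here is a rough, hypothetical sketch of a starting point: run the trained SAE over a Telugu prompt and inspect which features fire. It assumes `model` is the HookedTransformer loaded above, `sparse_autoencoder` is the object returned by `SAETrainingRunner(cfg).run()`, that this object exposes an `encode` method, and that the SAE was trained on `blocks.8.hook_resid_post` as configured.

```python
import torch

prompt = "పిల్లి లేదా మార్జాలం"  # Telugu text reused from earlier in the notebook
_, cache = model.run_with_cache(prompt)

# Residual-stream activations at the hook point the SAE was trained on.
acts = cache["blocks.8.hook_resid_post"]             # [batch, pos, d_in]

with torch.no_grad():
    feature_acts = sparse_autoencoder.encode(acts)   # [batch, pos, d_sae]

# Rough empirical L0: average number of active features per token.
print((feature_acts > 0).float().sum(dim=-1).mean())

# Indices of the most strongly firing features on the last token,
# as candidates to inspect on the Telugu corpus.
print(feature_acts[0, -1].topk(10).indices)
```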