Tokenizers
This collection features frozen, precomputed token embedding tensors designed for experimentation with semantic emergence in language models.
Note: Plane 0 (0–65535): all single Unicode code points (monograms) are mapped 1:1 to token codes, directly matching the standard Unicode BMP. Private and unused code ranges within Plane 0 (e.g., 0xE000–0xF8FF): all multi-character tokens (data-driven bigrams and trigrams mined from Wikipedia) are placed exclusively in these ranges. This design achieves total, lossless Unicode text coverage, with all multi-symbol tokens isolated above the core Unicode range. Vocabulary size: 65,536 tokens.
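As a minimal sketch of this scheme (the merge table and function names below are illustrative, not the actual bvv241 vocabulary or API), encoding can greedily match trigrams and bigrams against the private-use range and fall back to the character's own code point, which makes decoding trivially lossless for BMP text:

```python
PUA_START = 0xE000  # first token id reserved for multi-character tokens

# Toy multi-gram table; the real tokenizer mines its bigrams and
# trigrams from Wikipedia. Entries here are purely illustrative.
MULTIGRAMS = {"the": PUA_START, "in": PUA_START + 1}
ID_TO_TEXT = {v: k for k, v in MULTIGRAMS.items()}

def encode(text: str) -> list[int]:
    """Greedy longest-match: try trigrams, then bigrams; otherwise the
    monogram rule applies and the token id is the code point itself."""
    ids, i = [], 0
    while i < len(text):
        for n in (3, 2):  # longest match first
            token_id = MULTIGRAMS.get(text[i:i + n])
            if token_id is not None:
                ids.append(token_id)
                i += n
                break
        else:
            ids.append(ord(text[i]))  # 1:1 mapping for BMP code points
            i += 1
    return ids

def decode(ids: list[int]) -> str:
    """Lossless by construction: ids below PUA_START are bare code points."""
    return "".join(ID_TO_TEXT[t] if t >= PUA_START else chr(t) for t in ids)

assert decode(encode("the cat in the hat")) == "the cat in the hat"
```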
Bochkov/bvv241-max
Note: Plane 0 (0–65535): all single Unicode code points (monograms) are mapped 1:1 to token codes, directly matching the standard Unicode BMP. The tokenizer is built from the intersection of token text across leading SOTA models, covering the o200k_base, cl100k_base, Mistral-Nemo, QwQ-32B, DeepSeek-R1, and Qwen3-32B vocabularies; these multi-character tokens are placed in private and unused code ranges (Plane 0 high plus the supplementary range, e.g., 0xE000–0xF8FF and 65536–131071). Vocabulary size: 131,072 tokens. Embedding dimension: 1024.
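Since the collection ships precomputed embedding tensors meant to stay frozen, a typical use is to pin them as a non-trainable embedding layer. A minimal PyTorch sketch, assuming a hypothetical file name, tensor key, and the stated 131,072 × 1024 shape (the repository's actual artifact layout may differ):

```python
import torch
import torch.nn as nn

# Hypothetical file and key names; the actual artifact layout in
# Bochkov/bvv241-max may differ.
state = torch.load("bvv241_max_embeddings.pt", map_location="cpu")
weights = state["embedding"]            # expected shape: (131072, 1024)

# freeze=True keeps the substrate fixed: no gradients flow into it,
# so training only updates the layers stacked on top.
embedding = nn.Embedding.from_pretrained(weights, freeze=True)

token_ids = torch.tensor([[72, 101, 108, 108, 111]])  # "Hello" as code points
vectors = embedding(token_ids)          # (1, 5, 1024), requires_grad=False
```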
Bochkov/bvv241-nemo
Note: Plane 0 (0–65535): all single Unicode code points (monograms) are mapped 1:1 to token codes, directly matching the standard Unicode BMP. Private and unused code ranges (Plane 0 high plus the supplementary range, e.g., 0xE000–0xF8FF and 65536–131071): all multi-character tokens (bigrams, trigrams, and SOTA-model token strings) are placed exclusively in these ranges. This design achieves total, lossless Unicode text coverage, with all multi-symbol tokens isolated above the core Unicode range.
Bochkov/bvv241-abs
Note: Unified Unicode Tokenizer (SOTA intersection) with frozen embeddings and an extended vector dimension (4096). Plane 0 (0–65535): all single Unicode code points (monograms) are mapped 1:1 to token codes, directly matching the standard Unicode BMP. Private and unused code ranges (Plane 0 high plus the supplementary range, e.g., 0xE000–0xF8FF and 65536–131071): all multi-character tokens (bigrams, trigrams, and SOTA-model token strings) are placed exclusively in these ranges.
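For scale: assuming fp16 storage (an assumption; the dtype is not stated here), the 4096-dimensional table would occupy 131,072 × 4096 × 2 bytes = 1.0 GiB, versus 131,072 × 1024 × 2 bytes = 256 MiB for the 1024-dimensional bvv241-max variant.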
Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations
Paper • 2507.04886 • Published
Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate
Paper • 2507.07129 • Published