---
license: apache-2.0
tags:
  - bvv
  - frozen-embeddings
  - language-model
  - Chinese
  - English
  - conceptual-demo
  - toy-model
  - academic
model-index:
  - name: Bochkov/best_bvv_unfrozen_zh
    results:
      - task:
          type: text-generation
        metrics:
          - name: MMLU (average)
            type: mmlu
            value: 11.37
---

# best_bvv_unfrozen_zh

## Model Summary

`best_bvv_unfrozen_zh` is a 0.5B-parameter causal Transformer language model trained on a minimal combined English-Chinese corpus (9B tokens total, ~10% SFT/instruction mix) with an open-vocabulary, Unicode-based tokenizer.

- Embedding layer is trainable (not frozen), for direct comparison with the frozen-embedding variant `best_bvv_zh` (see the verification sketch below).
- Architecture: 16 transformer layers, 32 attention heads, rotary positional encoding.
- Tokenizer: custom Unicode-centric vocabulary with additional multi-character tokens.
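
A minimal sketch for inspecting these properties from Python. It assumes the custom remote code follows the standard Hugging Face interface (in particular `get_input_embeddings()`); attribute names in this repository may differ.

```python
# Sketch only: assumes the custom model class exposes the standard
# Hugging Face `get_input_embeddings()` accessor.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Bochkov/best_bvv_unfrozen_zh", trust_remote_code=True
)

emb = model.get_input_embeddings()
print("input embeddings trainable:", emb.weight.requires_grad)  # expected True for this variant
print("total parameters: %.2fB" % (sum(p.numel() for p in model.parameters()) / 1e9))
```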

This model is published to demonstrate the learnability of language models on minimal corpora, for research, comparison, and concept-validation purposes only.
It is not production-ready and is not intended for real-world information retrieval or safety-critical use.


## Key features

- Non-frozen (standard-embedding) LM trained on exactly the same data and tokenization regime as the frozen-embedding model `best_bvv_zh`.
- Provides a direct baseline for demonstrating the effects of frozen versus trainable token embeddings in large LMs (see the sketch after this list).
- Modest performance: the model is intentionally small and trained on limited data, to facilitate transparent ablations and theoretical experiments.
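
As an illustration of the two regimes, the sketch below freezes the input embeddings of this checkpoint to mimic the frozen-embedding setup of `best_bvv_zh`. It is a hypothetical example, not the authors' training code, and again assumes the standard `get_input_embeddings()` accessor.

```python
# Hypothetical sketch: emulate the frozen-embedding regime on this checkpoint.
# Not the authors' training setup; assumes standard HF accessors.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Bochkov/best_bvv_unfrozen_zh", trust_remote_code=True
)

for p in model.get_input_embeddings().parameters():
    p.requires_grad = False  # embeddings stay fixed during any further fine-tuning

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters with frozen embeddings: {trainable / 1e6:.1f}M")
```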

## Intended uses

- As a baseline for proof-of-concept comparisons with frozen-embedding variants.
- To benchmark how learnability, metric convergence, and MoE-fusion feasibility change between standard and frozen embedding regimes (a simple comparison sketch follows this list).
- For research into modular LMs, tokenization strategies, and lightweight LM MoE approaches.
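
A minimal, illustrative way to compare the two regimes is to measure next-token loss of this checkpoint and its frozen-embedding counterpart on the same text. The sketch below is not the published evaluation harness; it assumes both repositories load through `AutoModelForCausalLM`/`AutoTokenizer` with `trust_remote_code=True` and return standard `.logits`.

```python
# Illustrative comparison sketch, not the published evaluation pipeline.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

sample = "Language models can be trained on surprisingly small corpora. 语言模型也可以在很小的语料上训练。"

for repo in ("Bochkov/best_bvv_unfrozen_zh", "Bochkov/best_bvv_zh"):
    tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    lm = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True).eval()

    ids = tok(sample, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = lm(input_ids=ids).logits

    # Shift so each position predicts the next token, then average cross-entropy.
    loss = F.cross_entropy(
        logits[:, :-1, :].reshape(-1, logits.size(-1)),
        ids[:, 1:].reshape(-1),
    )
    print(f"{repo}: loss={loss.item():.3f}, ppl={loss.exp().item():.1f}")
```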

## Limitations

- Small data regime: trained on only 9B tokens, with a significant fraction of SFT/instruction data, so many advanced LM capabilities may be missing.
- Not tuned for open-domain information access; not a direct competitor to recent SOTA large LMs.
- The model and tokenizer are intended for research, ablation, and demonstration purposes only.

## Metrics

Subset of evaluation results:

| Benchmark | Score | σ |
|---|---|---|
| MMLU (average) | 14.0% ± 0.09% | 0.14% |
| ARC-e | 19.74% ± 0.70% | 1.13% |
| ARC-c | 25.02% ± 0.97% | 1.57% |
| C-SENSE | 18.98% ± 0.56% | 0.90% |
| SQuAD | 13.52% ± 0.75% | 1.21% |
| BLEU (en-zh) | 1.65% ± 0.32% | |
| BLEU (zh-en) | 5.93% ± 0.32% | |

## 🧑‍🔬 Citation & Concept

If you use or build upon this demo, please cite:

```bibtex
@misc{bochkov2025emergentsemanticstokenembeddings,
      title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations},
      author={A. Bochkov},
      year={2025},
      eprint={2507.04886},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04886},
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129},
}
```

This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs — a step toward modular, fusable, multilingual LMs.

## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load the model (custom architecture, so trust_remote_code is required) and tokenizer
model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_unfrozen_zh', trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_unfrozen_zh')

# Encode a prompt and sample a continuation
inputs = tokenizer("Hello! ", return_tensors="pt").to(device)
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))
```