Best demo models [pretrain] (collection)
Frozen-embedding LMs (en/ru/zh) and their MoE fusion. Baselines: frozen vs. unfrozen embedding ablation.
best_bvv_unfrozen_zh is a 0.5B-parameter causal Transformer language model trained on a minimal combined English-Chinese corpus with an open-vocabulary, Unicode-based tokenizer (9B tokens total, ~10% SFT/instruction mix).
This model is the unfrozen-embedding ablation baseline for best_bvv_zh, the corresponding frozen-embedding model. It is published to demonstrate the learnability of language models on minimal corpora, for research, comparison, and concept-validation purposes only. It is not a production-ready model and is not intended for real-world information retrieval or safety-critical use. A subset of evaluation results is available; see the README for the full breakdown.
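As a rough way to compare the two ablation variants, the same text can be scored under both models. The sketch below is illustrative only: it assumes the frozen-embedding counterpart is published as Bochkov/best_bvv_zh and that the custom model code accepts labels for loss computation like a standard Hugging Face causal LM.

import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(repo_id: str, text: str) -> float:
    # Load one variant and score `text` with the standard causal-LM loss.
    tok = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True).eval()
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        # labels=input_ids yields next-token cross-entropy (assumed supported by the custom code)
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

sample = "语言模型可以在小规模语料上学习。"  # "Language models can learn on small corpora."
for repo in ("Bochkov/best_bvv_unfrozen_zh", "Bochkov/best_bvv_zh"):
    print(repo, perplexity(repo, sample))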
If you use or build upon this demo, please cite:
@misc{bochkov2025emergentsemanticstokenembeddings,
      title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations},
      author={A. Bochkov},
      year={2025},
      eprint={2507.04886},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04886},
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129},
}
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs: a step toward modular, fusable, multilingual LMs.
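In practice, the frozen-embedding setup amounts to excluding the token-embedding matrix from optimization, so that all semantic learning is pushed into the transformer blocks. A minimal PyTorch sketch of that idea (not the authors' training code; it assumes the custom model exposes the standard get_input_embeddings() accessor):

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_unfrozen_zh', trust_remote_code=True)

# Freeze the token-embedding table so it receives no gradient updates
embeddings = model.get_input_embeddings()
embeddings.weight.requires_grad_(False)

# The optimizer only sees the remaining (trainable) parameters, i.e. the transformer blocks
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=3e-4)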
Example usage:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load the model and tokenizer; trust_remote_code=True lets the repo's custom code run
model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_unfrozen_zh', trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_unfrozen_zh')

# Encode a prompt and sample a continuation
inputs = tokenizer("Hello! ", return_tensors="pt").to(device)
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(outputs[0]))
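Because the tokenizer is open-vocabulary and Unicode-based, mixed English/Chinese input can be encoded without out-of-vocabulary failures. A small inspection sketch (the exact token granularity depends on the repo's custom tokenizer implementation):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_unfrozen_zh')

text = "Hello, 世界!"  # mixed English/Chinese input
ids = tokenizer(text)["input_ids"]
print(ids)                                   # raw token ids
print([tokenizer.decode([i]) for i in ids])  # surface form of each individual token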