Bochkov committed on
Commit 871fd9a · verified · 1 Parent(s): f7dbc8f

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -14,8 +14,10 @@ library_name: transformers
 
 # pro_bvv_unfrozen: 200M baseline LM (non-frozen embeddings)
 
-This model is described in the paper [Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate](https://huggingface.co/papers/2507.07129).
-Code: https://github.com/Bochkov/BVV241
+This repository contains the model and associated resources from the papers
+[📚 Paper (Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations)](https://huggingface.co/papers/2507.04886) -
+[📚 Paper (Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate)](https://huggingface.co/papers/2507.07129) -
+[💻 Code](https://github.com/AVBochkov/Embeddings)
 
 **Description**
 
@@ -39,7 +41,6 @@ This is a baseline English language model (200M parameters) trained in the **cla
 
 
 ---
-
 ⚠️ Limitations
 Research use only.
 Trained on a small subset.
@@ -60,7 +61,6 @@ If you use this model or the underlying concepts in your research, please cite o
 primaryClass={cs.CL},
 url={https://arxiv.org/abs/2507.04886},
 }
-
 @misc{bochkov2025growingtransformersmodularcomposition,
 title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
 author={A. Bochkov},
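
Since the model card declares `library_name: transformers`, here is a minimal loading sketch. The Hub repo id `Bochkov/pro_bvv_unfrozen` and the `trust_remote_code` flag are assumptions inferred from the commit author and model name, not something this commit confirms.

```python
# Minimal sketch, assuming the checkpoint is published as Bochkov/pro_bvv_unfrozen
# and loads through the standard transformers auto classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Bochkov/pro_bvv_unfrozen"  # assumed repo id; adjust to the actual Hub path

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Generate a short continuation to sanity-check the 200M baseline.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```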