---
license: apache-2.0
datasets:
- HuggingFaceTB/smollm-corpus
language:
- en
pipeline_tag: text2text-generation
library_name: transformers
---
# tFINE-900m-e16-d32-1024ctx
A T5 model pretrained with nanoT5:
- ~900M parameters: 16 encoder layers, 32 decoder layers
- SentencePiece tokenizer with a 48k vocab and byte-pair fallback
- handles whitespace correctly (unlike the standard T5 tokenizer)
- 1024-token context length during pretraining
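
## Usage

A minimal loading sketch with `transformers` (the model card's declared library). The repo id below is an assumption inferred from the model name; substitute the actual Hugging Face path. Note this is a pretraining checkpoint (span corruption objective), so it is intended as a base for fine-tuning rather than direct generation.

```python
# Sketch: load the pretrained checkpoint for fine-tuning.
# The repo id is hypothetical -- replace with the real Hugging Face path.
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo_id = "tFINE-900m-e16-d32-1024ctx"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = T5ForConditionalGeneration.from_pretrained(repo_id)

# Tokenize within the pretraining context length (1024 tokens).
inputs = tokenizer(
    "Example input text.",
    max_length=1024,
    truncation=True,
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the tokenizer uses byte-pair fallback, out-of-vocabulary characters degrade to byte pieces instead of `<unk>`, and whitespace round-trips through encode/decode.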