t5-v1_1-small pretrained with an MLM (span corruption) objective on:

• kbd (custom Latin script), 835K lines: text scraped from news sites, books, etc.

• ru, 3M lines: Wikipedia corpus from OPUS
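T5's MLM objective is span corruption: contiguous token spans in the input are replaced with sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, …), and the target lists each sentinel followed by the tokens it hides. A minimal sketch of that input/target construction (the helper function and example sentence are illustrative, not from the training code):

```python
def span_corrupt(tokens, spans):
    """Build a T5-style span-corruption example.

    tokens: list of token strings.
    spans:  list of (start, end) index pairs to mask, in order, non-overlapping.
    Returns (input_text, target_text) joined with spaces.
    """
    inp, tgt = [], []
    cursor = 0
    for k, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{k}>"
        inp.extend(tokens[cursor:start])  # keep unmasked tokens in the input
        inp.append(sentinel)              # replace the masked span with a sentinel
        tgt.append(sentinel)              # target: sentinel, then the hidden tokens
        tgt.extend(tokens[start:end])
        cursor = end
    inp.extend(tokens[cursor:])
    tgt.append(f"<extra_id_{len(spans)}>")  # closing sentinel, as in T5
    return " ".join(inp), " ".join(tgt)

inp, tgt = span_corrupt("the quick brown fox jumps".split(), [(1, 3)])
print(inp)  # the <extra_id_0> fox jumps
print(tgt)  # <extra_id_0> quick brown <extra_id_1>
```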

tokenizer: SentencePiece unigram, 8K shared vocabulary


Model: anzorq/kbd_lat-835k_ru-3M_t5-small