Update README.md
README.md (changed):

@@ -17,4 +17,5 @@ Pretrained T5 model with nanoT5:
 - ~900m parameters, 16 layers in encoder, 32 layers in decoder
 - sentencepiece tokenizer with 48k vocab & byte-pair fallback
 - handles whitespaces etc correctly (unlike standard T5 tokenizer)
-- 1024 ctx during pretrain
+- 1024 ctx during pretrain
+- `relative_attention_num_buckets` increased to 48 from standard 32 for context length upscaling
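For context, here is a minimal sketch of how those bullet points could map onto a Hugging Face `T5Config`. The `d_model`, `d_ff`, and `num_heads` values are illustrative assumptions (not stated in the README), picked only to suggest a roughly ~900m-parameter scale:

```python
from transformers import T5Config

# Hypothetical config mirroring the README bullets; the width parameters
# are guesses for illustration, not the actual nanoT5 settings.
config = T5Config(
    vocab_size=48000,                   # 48k sentencepiece vocab
    num_layers=16,                      # encoder layers
    num_decoder_layers=32,              # decoder layers
    d_model=1024,                       # assumed model width
    d_ff=2816,                          # assumed feed-forward width
    num_heads=16,                       # assumed head count
    relative_attention_num_buckets=48,  # raised from the T5 default of 32
)
```

Roughly, T5's relative-position bias bins token distances into a fixed number of buckets (log-spaced at long range), so extra buckets keep the bias discriminative over the longer 1024-token pretraining context.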
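Similarly, a sketch of how a tokenizer with those properties might be trained with the `sentencepiece` library; the corpus path and model prefix are placeholders, and the exact flags used for this model are not given in the diff:

```python
import sentencepiece as spm

# Illustrative training call; flag choices are assumptions, not the repo's.
spm.SentencePieceTrainer.train(
    input="corpus.txt",                  # placeholder corpus path
    model_prefix="t5_48k",               # placeholder output prefix
    vocab_size=48000,                    # 48k vocab
    byte_fallback=True,                  # fall back to bytes for unseen characters
    remove_extra_whitespaces=False,      # keep whitespace runs intact
    normalization_rule_name="identity",  # no Unicode normalization
)
```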