pszemraj committed · Commit 1fd7fb6 · verified · Parent(s): 1cb1f1a

Update README.md

Files changed (1): README.md (+2, -1)
README.md CHANGED
@@ -17,4 +17,5 @@ Pretrained T5 model with nanoT5:
  - ~900m parameters, 16 layers in encoder, 32 layers in decoder
  - sentencepiece tokenizer with 48k vocab & byte-pair fallback
  - handles whitespaces etc correctly (unlike standard T5 tokenizer)
- - 1024 ctx during pretrain
+ - 1024 ctx during pretrain
+ - `relative_attention_num_buckets` increased to 48 from standard 32 for context length upscaling
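
For context, a minimal sketch of how the settings described in this diff might map onto a Hugging Face `T5Config`. Only the layer counts, the ~48k vocab, and the bucket count come from the README; the exact vocab figure and everything else here are assumptions, not the repo's actual configuration file.

```python
from transformers import T5Config

# Sketch of a config matching the README's description (assumed values,
# not the repo's actual config). T5 uses relative position buckets rather
# than absolute position embeddings, so raising
# relative_attention_num_buckets from the default 32 to 48 gives finer
# position resolution when scaling the pretraining context to 1024 tokens.
config = T5Config(
    vocab_size=48000,                   # "48k vocab" -- exact value assumed
    num_layers=16,                      # encoder layers
    num_decoder_layers=32,              # decoder layers
    relative_attention_num_buckets=48,  # up from the T5 default of 32
)
print(config.relative_attention_num_buckets)  # 48
```

Because T5's attention is relative rather than absolute, the context length is not baked into the config itself; the 1024-token context is a pretraining-time choice, which is why the bucket count is the knob the commit adjusts.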