---
language:
- en
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- the_pile
---
# RWKV-4 169M
# Use RWKV-4 models (NOT RWKV-4a, NOT RWKV-4b) unless you know what you are doing.
## Model Description
RWKV-4 169M is an L12-D768 (12-layer, 768-dimensional) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
Use https://github.com/BlinkDL/ChatRWKV to run it.
* `ctx_len` = 1024
* `n_layer` = 12
* `n_embd` = 768
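As a minimal sketch, you can load and sample from this checkpoint with the `rwkv` pip package (`pip install rwkv`), the inference backend that ChatRWKV builds on. The checkpoint path and tokenizer file below are assumptions: point them at your local copy of the `.pth` file and the `20B_tokenizer.json` from the ChatRWKV repo.

```python
import os
os.environ['RWKV_JIT_ON'] = '1'   # TorchScript JIT for faster inference
os.environ['RWKV_CUDA_ON'] = '0'  # set to '1' to build the custom CUDA kernel

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Assumed local paths -- adjust to where you downloaded the files.
# 'strategy' selects device/precision, e.g. 'cuda fp16' on GPU.
model = RWKV(model='RWKV-4-Pile-169M-20220807-8023', strategy='cpu fp32')
pipeline = PIPELINE(model, '20B_tokenizer.json')  # GPT-NeoX tokenizer used by Pile models

args = PIPELINE_ARGS(temperature=1.0, top_p=0.85)
print(pipeline.generate('\nIn a shocking finding,', token_count=100, args=args))
```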
Final checkpoint:
`RWKV-4-Pile-169M-20220807-8023.pth`: trained on the Pile for 332B tokens.
* Pile loss 2.5355
* LAMBADA ppl 29.33, acc 32.99%
* PIQA acc 65.07%
* SC2016 acc 58.79%
* Hellaswag acc_norm 32.26%
With tiny attention (`--tiny_att_dim 256 --tiny_att_layer 9`):
`RWKV-4a-Pile-170M-20221209-7955.pth`
* Pile loss 2.4702
* LAMBADA ppl 21.42, acc 38.23%
* PIQA acc 63.76%
* SC2016 acc 59.06%
* Hellaswag acc_norm 32.40%
`RWKV-4b-Pile-171M-20230202-7922.pth` (`--my_testing 'a'`)
* Pile loss 2.4222
* LAMBADA ppl 22.02, acc 38.56%
* PIQA acc 64.04%
* SC2016 acc 59.91%
* Hellaswag acc_norm 33.33%