# bert_uncased_L-2_H-128_A-2_stsb
This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 on the GLUE STSB dataset. It achieves the following results on the evaluation set:
- Loss: 0.9185
- Pearson: 0.7914
- Spearmanr: 0.8078
- Combined Score: 0.7996
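
For reference, here is a minimal inference sketch (not part of the original card). It assumes the checkpoint is available on the Hub under `gokulsrinivasagan/bert_uncased_L-2_H-128_A-2_stsb`, the repository this card belongs to. STS-B is a regression task, so the classification head has a single output whose value is the predicted similarity score (roughly on the 0–5 STS scale).

```python
# Minimal usage sketch: score the similarity of a sentence pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokulsrinivasagan/bert_uncased_L-2_H-128_A-2_stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Sentence pairs are encoded together, as in GLUE STS-B.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a guitar.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted similarity: {score:.3f}")
```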
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
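
The sketch below shows one way to reproduce this setup with the `Trainer` API using the hyperparameters listed above. It is an assumption-laden reconstruction, not the script actually used for this card: the column handling and metric code follow the standard GLUE STS-B recipe, and although `num_epochs` is 50, the log below stops at epoch 23, which suggests early stopping may also have been configured (not reflected here).

```python
# Sketch of a fine-tuning setup mirroring the listed hyperparameters
# (lr 5e-05, batch size 256, seed 10, linear schedule, 50 epochs).
import numpy as np
from datasets import load_dataset
from scipy.stats import pearsonr, spearmanr
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "google/bert_uncased_L-2_H-128_A-2"
tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels=1 turns the head into a regression head (MSE loss on float labels).
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)

raw = load_dataset("glue", "stsb")

def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

encoded = raw.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    preds = np.squeeze(eval_pred.predictions)
    labels = eval_pred.label_ids
    pearson = pearsonr(preds, labels)[0]
    spearman = spearmanr(preds, labels)[0]
    return {
        "pearson": pearson,
        "spearmanr": spearman,
        "combined_score": (pearson + spearman) / 2,
    }

args = TrainingArguments(
    output_dir="bert_uncased_L-2_H-128_A-2_stsb",
    learning_rate=5e-05,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    num_train_epochs=50,
    seed=10,
    lr_scheduler_type="linear",
    eval_strategy="epoch",
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```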
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|---------------|-------|------|-----------------|---------|-----------|----------------|
| 8.7461        | 1.0   | 23   | 5.9052          | 0.2839  | 0.2249    | 0.2544         |
| 6.3503        | 2.0   | 46   | 4.1500          | 0.4510  | 0.3995    | 0.4253         |
| 4.6275        | 3.0   | 69   | 3.1621          | 0.5516  | 0.5407    | 0.5462         |
| 3.6391        | 4.0   | 92   | 2.6588          | 0.6168  | 0.6455    | 0.6311         |
| 3.0189        | 5.0   | 115  | 2.3421          | 0.6722  | 0.7192    | 0.6957         |
| 2.59          | 6.0   | 138  | 2.0467          | 0.6785  | 0.7346    | 0.7066         |
| 2.172         | 7.0   | 161  | 1.6885          | 0.6686  | 0.6274    | 0.6480         |
| 1.7948        | 8.0   | 184  | 1.4313          | 0.6900  | 0.6565    | 0.6733         |
| 1.5153        | 9.0   | 207  | 1.2854          | 0.7049  | 0.7020    | 0.7035         |
| 1.3213        | 10.0  | 230  | 1.1953          | 0.7136  | 0.7304    | 0.7220         |
| 1.1482        | 11.0  | 253  | 1.1937          | 0.7066  | 0.7005    | 0.7035         |
| 1.0318        | 12.0  | 276  | 1.0680          | 0.7379  | 0.7727    | 0.7553         |
| 0.9444        | 13.0  | 299  | 1.0875          | 0.7445  | 0.7877    | 0.7661         |
| 0.8957        | 14.0  | 322  | 1.0566          | 0.7515  | 0.7869    | 0.7692         |
| 0.8101        | 15.0  | 345  | 1.0417          | 0.7613  | 0.7947    | 0.7780         |
| 0.7743        | 16.0  | 368  | 0.9960          | 0.7708  | 0.7945    | 0.7827         |
| 0.7407        | 17.0  | 391  | 0.9344          | 0.7847  | 0.8062    | 0.7954         |
| 0.6842        | 18.0  | 414  | 0.9185          | 0.7914  | 0.8078    | 0.7996         |
| 0.6628        | 19.0  | 437  | 0.9989          | 0.7836  | 0.7979    | 0.7907         |
| 0.6402        | 20.0  | 460  | 0.9199          | 0.7952  | 0.8082    | 0.8017         |
| 0.6215        | 21.0  | 483  | 0.9276          | 0.7954  | 0.8100    | 0.8027         |
| 0.6069        | 22.0  | 506  | 0.9503          | 0.7956  | 0.8078    | 0.8017         |
| 0.6101        | 23.0  | 529  | 0.9789          | 0.7972  | 0.8134    | 0.8053         |
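
As a note on the metrics, the Combined Score column appears to be the arithmetic mean of the Pearson and Spearman correlations, the convention used by the standard GLUE fine-tuning scripts; the reported headline numbers are consistent with this:

```python
# Combined score as the mean of Pearson and Spearman correlations,
# using the epoch-18 (best validation loss) values from the table above.
pearson, spearman = 0.7914, 0.8078
combined = (pearson + spearman) / 2
print(round(combined, 4))  # 0.7996
```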
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3