# BERT_V8_sp10_lw40_ex100_lo50_k10_k10_fold4
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased); the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:
- Loss: 0.5427
- Qwk: 0.5669
- Mse: 0.5427
- Rmse: 0.7367
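
For reference, Qwk is the quadratic weighted kappa, and the validation Loss coincides with Mse at every logged step, which suggests a mean-squared-error objective over numeric scores. Below is a minimal sketch of how these metrics can be computed; the score arrays are hypothetical and this is not necessarily the exact evaluation code used here.

```python
# Hedged sketch: computing MSE, RMSE, and quadratic weighted kappa (QWK)
# for a score-prediction task. Gold scores and predictions are made up.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([2, 3, 1, 4])           # hypothetical gold scores
y_pred = np.array([2.2, 2.8, 1.4, 3.6])   # hypothetical model outputs

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
# QWK compares discrete labels, so continuous predictions are rounded first.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")
print(f"MSE={mse:.4f}  RMSE={rmse:.4f}  QWK={qwk:.4f}")
```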
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
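
As a hedged reconstruction, the list above corresponds to roughly the following Hugging Face `TrainingArguments`; `output_dir` and any settings not listed (e.g. weight decay, warmup) are assumptions, not taken from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BERT_V8_sp10_lw40_ex100_lo50_k10_k10_fold4",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",         # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```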
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 1.0   | 5    | 6.7078          | 0.0    | 6.7078 | 2.5900 |
| No log        | 2.0   | 10   | 4.4067          | 0.0079 | 4.4067 | 2.0992 |
| No log        | 3.0   | 15   | 2.4365          | 0.0175 | 2.4365 | 1.5609 |
| No log        | 4.0   | 20   | 1.3072          | 0.0107 | 1.3072 | 1.1433 |
| No log        | 5.0   | 25   | 0.8520          | 0.3982 | 0.8520 | 0.9230 |
| No log        | 6.0   | 30   | 0.6719          | 0.3895 | 0.6719 | 0.8197 |
| No log        | 7.0   | 35   | 0.9309          | 0.2902 | 0.9309 | 0.9648 |
| No log        | 8.0   | 40   | 0.7205          | 0.4966 | 0.7205 | 0.8488 |
| No log        | 9.0   | 45   | 0.7712          | 0.4634 | 0.7712 | 0.8782 |
| No log        | 10.0  | 50   | 0.8108          | 0.4906 | 0.8108 | 0.9004 |
| No log        | 11.0  | 55   | 0.5563          | 0.5587 | 0.5563 | 0.7458 |
| No log        | 12.0  | 60   | 0.5058          | 0.5962 | 0.5058 | 0.7112 |
| No log        | 13.0  | 65   | 0.5596          | 0.6458 | 0.5596 | 0.7481 |
| No log        | 14.0  | 70   | 0.5800          | 0.6231 | 0.5800 | 0.7616 |
| No log        | 15.0  | 75   | 0.5088          | 0.5849 | 0.5088 | 0.7133 |
| No log        | 16.0  | 80   | 0.5191          | 0.6323 | 0.5191 | 0.7205 |
| No log        | 17.0  | 85   | 0.5390          | 0.5711 | 0.5390 | 0.7342 |
| No log        | 18.0  | 90   | 0.5895          | 0.6454 | 0.5895 | 0.7678 |
| No log        | 19.0  | 95   | 0.5398          | 0.6112 | 0.5398 | 0.7347 |
| No log        | 20.0  | 100  | 0.5523          | 0.5777 | 0.5523 | 0.7432 |
| No log        | 21.0  | 105  | 0.7372          | 0.5103 | 0.7372 | 0.8586 |
| No log        | 22.0  | 110  | 0.6965          | 0.5279 | 0.6965 | 0.8346 |
| No log        | 23.0  | 115  | 0.5263          | 0.5886 | 0.5263 | 0.7255 |
| No log        | 24.0  | 120  | 0.5104          | 0.5909 | 0.5104 | 0.7144 |
| No log        | 25.0  | 125  | 0.5223          | 0.5781 | 0.5223 | 0.7227 |
| No log        | 26.0  | 130  | 0.5991          | 0.5468 | 0.5991 | 0.7740 |
| No log        | 27.0  | 135  | 0.5744          | 0.5574 | 0.5744 | 0.7579 |
| No log        | 28.0  | 140  | 0.5720          | 0.5672 | 0.5720 | 0.7563 |
| No log        | 29.0  | 145  | 0.5213          | 0.5593 | 0.5213 | 0.7220 |
| No log        | 30.0  | 150  | 0.6727          | 0.5252 | 0.6727 | 0.8202 |
| No log        | 31.0  | 155  | 0.5432          | 0.5692 | 0.5432 | 0.7370 |
| No log        | 32.0  | 160  | 0.5245          | 0.5905 | 0.5245 | 0.7242 |
| No log        | 33.0  | 165  | 0.5201          | 0.5338 | 0.5201 | 0.7212 |
| No log        | 34.0  | 170  | 0.5244          | 0.5561 | 0.5244 | 0.7242 |
| No log        | 35.0  | 175  | 0.5202          | 0.5556 | 0.5202 | 0.7212 |
| No log        | 36.0  | 180  | 0.5320          | 0.5544 | 0.5320 | 0.7294 |
| No log        | 37.0  | 185  | 0.5401          | 0.5909 | 0.5401 | 0.7349 |
| No log        | 38.0  | 190  | 0.6913          | 0.5194 | 0.6913 | 0.8314 |
| No log        | 39.0  | 195  | 0.5447          | 0.5519 | 0.5447 | 0.7380 |
| No log        | 40.0  | 200  | 0.5087          | 0.5540 | 0.5087 | 0.7132 |
| No log        | 41.0  | 205  | 0.5323          | 0.5580 | 0.5323 | 0.7296 |
| No log        | 42.0  | 210  | 0.5400          | 0.5569 | 0.5400 | 0.7349 |
| No log        | 43.0  | 215  | 0.5427          | 0.5669 | 0.5427 | 0.7367 |
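
Training stopped after epoch 43 of the configured 100, and the final row matches the reported evaluation metrics, consistent with early stopping on the validation set. A minimal loading sketch follows; it assumes the checkpoint exposes a single-output regression head (suggested by the MSE/RMSE metrics), which this card does not confirm.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "genki10/BERT_V8_sp10_lw40_ex100_lo50_k10_k10_fold4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

inputs = tokenizer("An example text to score.", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumes one regression output
print(score)
```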
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0