# BERT_V8_sp10_lw20_ex200_lo100_k3_k3_fold2
This model is a fine-tuned version of google-bert/bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.0056
- Qwk: 0.3282
- Mse: 1.0055
- Rmse: 1.0028
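Here Qwk is quadratic weighted kappa (Cohen's kappa with quadratic weights), Mse is mean squared error, and Rmse its square root. Below is a minimal sketch of how such metrics can be computed with scikit-learn; the sample values and the rounding of continuous predictions to integer scores before kappa are assumptions, not taken from this card:

```python
# Sketch: computing Qwk / Mse / Rmse from predictions and gold labels.
# Assumes scikit-learn and numpy; rounding model outputs before kappa
# is an assumption about the evaluation setup.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

preds = np.array([1.2, 2.8, 0.1, 3.9])   # hypothetical model outputs
labels = np.array([1, 3, 0, 4])          # hypothetical gold scores

mse = mean_squared_error(labels, preds)
rmse = np.sqrt(mse)
qwk = cohen_kappa_score(labels, np.rint(preds).astype(int), weights="quadratic")
print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}  Rmse: {rmse:.4f}")
```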
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
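These settings map onto the Trainer API roughly as in the sketch below; `output_dir` and the per-epoch evaluation strategy are assumptions not stated in the card:

```python
# Sketch: TrainingArguments matching the hyperparameters listed above.
# Assumes the transformers Trainer API; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BERT_V8_sp10_lw20_ex200_lo100_k3_k3_fold2",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",  # assumption: the results table reports per-epoch validation
)
```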
### Training results
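"No log" in the Training Loss column likely means the run's 132 total steps (2 per epoch) never reached the Trainer's default logging interval of 500 steps, so no training loss was recorded. The log ends at epoch 66 of the configured 100, consistent with early stopping, though the card does not say so.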
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:---|:---|:---|:---|:---|:---|:---|
No log | 1.0 | 2 | 10.4622 | 0.0163 | 10.4628 | 3.2346 |
No log | 2.0 | 4 | 6.7688 | 0.0 | 6.7689 | 2.6017 |
No log | 3.0 | 6 | 6.1475 | -0.0020 | 6.1477 | 2.4795 |
No log | 4.0 | 8 | 5.3116 | 0.0102 | 5.3120 | 2.3048 |
No log | 5.0 | 10 | 4.3619 | 0.0039 | 4.3623 | 2.0886 |
No log | 6.0 | 12 | 3.6627 | 0.0 | 3.6632 | 1.9140 |
No log | 7.0 | 14 | 3.0070 | 0.0 | 3.0075 | 1.7342 |
No log | 8.0 | 16 | 2.4049 | 0.0331 | 2.4054 | 1.5509 |
No log | 9.0 | 18 | 2.0182 | 0.0189 | 2.0188 | 1.4209 |
No log | 10.0 | 20 | 1.4963 | 0.0280 | 1.4968 | 1.2235 |
No log | 11.0 | 22 | 1.2464 | 0.0107 | 1.2469 | 1.1167 |
No log | 12.0 | 24 | 0.9570 | 0.0213 | 0.9574 | 0.9785 |
No log | 13.0 | 26 | 0.8150 | 0.3739 | 0.8153 | 0.9030 |
No log | 14.0 | 28 | 0.8564 | 0.1086 | 0.8568 | 0.9256 |
No log | 15.0 | 30 | 0.7775 | 0.4449 | 0.7775 | 0.8818 |
No log | 16.0 | 32 | 0.7468 | 0.5013 | 0.7467 | 0.8641 |
No log | 17.0 | 34 | 0.8792 | 0.0836 | 0.8796 | 0.9379 |
No log | 18.0 | 36 | 0.5846 | 0.4503 | 0.5845 | 0.7645 |
No log | 19.0 | 38 | 0.5783 | 0.4806 | 0.5781 | 0.7604 |
No log | 20.0 | 40 | 0.8705 | 0.1218 | 0.8709 | 0.9332 |
No log | 21.0 | 42 | 0.9332 | 0.0368 | 0.9337 | 0.9663 |
No log | 22.0 | 44 | 0.5711 | 0.4025 | 0.5708 | 0.7555 |
No log | 23.0 | 46 | 0.5777 | 0.4185 | 0.5774 | 0.7599 |
No log | 24.0 | 48 | 0.7281 | 0.3066 | 0.7281 | 0.8533 |
No log | 25.0 | 50 | 0.6965 | 0.3927 | 0.6963 | 0.8344 |
No log | 26.0 | 52 | 1.0714 | 0.3609 | 1.0709 | 1.0348 |
No log | 27.0 | 54 | 0.7052 | 0.4 | 0.7051 | 0.8397 |
No log | 28.0 | 56 | 0.6946 | 0.3442 | 0.6945 | 0.8334 |
No log | 29.0 | 58 | 0.6313 | 0.4280 | 0.6308 | 0.7942 |
No log | 30.0 | 60 | 0.7233 | 0.3680 | 0.7233 | 0.8504 |
No log | 31.0 | 62 | 0.7028 | 0.4460 | 0.7024 | 0.8381 |
No log | 32.0 | 64 | 0.8469 | 0.4730 | 0.8461 | 0.9198 |
No log | 33.0 | 66 | 0.7176 | 0.4802 | 0.7170 | 0.8468 |
No log | 34.0 | 68 | 0.8306 | 0.3269 | 0.8306 | 0.9114 |
No log | 35.0 | 70 | 0.6809 | 0.4958 | 0.6802 | 0.8248 |
No log | 36.0 | 72 | 0.6984 | 0.5127 | 0.6976 | 0.8352 |
No log | 37.0 | 74 | 0.7714 | 0.4242 | 0.7712 | 0.8782 |
No log | 38.0 | 76 | 0.7812 | 0.3929 | 0.7810 | 0.8838 |
No log | 39.0 | 78 | 0.6830 | 0.5125 | 0.6823 | 0.8260 |
No log | 40.0 | 80 | 0.6968 | 0.5117 | 0.6962 | 0.8344 |
No log | 41.0 | 82 | 0.9363 | 0.3224 | 0.9361 | 0.9675 |
No log | 42.0 | 84 | 0.9321 | 0.3175 | 0.9322 | 0.9655 |
No log | 43.0 | 86 | 0.8141 | 0.5017 | 0.8133 | 0.9018 |
No log | 44.0 | 88 | 0.7837 | 0.4769 | 0.7831 | 0.8849 |
No log | 45.0 | 90 | 0.9317 | 0.3448 | 0.9318 | 0.9653 |
No log | 46.0 | 92 | 0.9740 | 0.2709 | 0.9739 | 0.9869 |
No log | 47.0 | 94 | 0.8248 | 0.4128 | 0.8243 | 0.9079 |
No log | 48.0 | 96 | 0.9863 | 0.2800 | 0.9865 | 0.9932 |
No log | 49.0 | 98 | 0.8704 | 0.3435 | 0.8704 | 0.9330 |
No log | 50.0 | 100 | 0.7633 | 0.4705 | 0.7627 | 0.8733 |
No log | 51.0 | 102 | 0.7719 | 0.4656 | 0.7714 | 0.8783 |
No log | 52.0 | 104 | 0.8499 | 0.4076 | 0.8499 | 0.9219 |
No log | 53.0 | 106 | 0.7772 | 0.4320 | 0.7769 | 0.8814 |
No log | 54.0 | 108 | 0.8962 | 0.3712 | 0.8961 | 0.9466 |
No log | 55.0 | 110 | 0.8803 | 0.3640 | 0.8803 | 0.9383 |
No log | 56.0 | 112 | 0.8137 | 0.3875 | 0.8135 | 0.9019 |
No log | 57.0 | 114 | 0.8070 | 0.4169 | 0.8067 | 0.8982 |
No log | 58.0 | 116 | 0.7858 | 0.4799 | 0.7852 | 0.8861 |
No log | 59.0 | 118 | 0.8167 | 0.4461 | 0.8161 | 0.9034 |
No log | 60.0 | 120 | 0.8647 | 0.3987 | 0.8643 | 0.9297 |
No log | 61.0 | 122 | 0.8598 | 0.4299 | 0.8593 | 0.9270 |
No log | 62.0 | 124 | 0.9330 | 0.3689 | 0.9329 | 0.9659 |
No log | 63.0 | 126 | 0.9015 | 0.3940 | 0.9013 | 0.9494 |
No log | 64.0 | 128 | 0.8413 | 0.4617 | 0.8408 | 0.9169 |
No log | 65.0 | 130 | 0.8875 | 0.4311 | 0.8872 | 0.9419 |
No log | 66.0 | 132 | 1.0056 | 0.3282 | 1.0055 | 1.0028 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
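The checkpoint can be loaded from the Hub as sketched below. This assumes a single-output regression head (suggested by the Mse/Rmse metrics); the actual head configuration is not documented in the card:

```python
# Sketch: loading the fine-tuned checkpoint from the Hugging Face Hub.
# The regression-style single-score output is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "genki10/BERT_V8_sp10_lw20_ex200_lo100_k3_k3_fold2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example essay to score.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits
print(score)
```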