claim-judge

This model is a fine-tuned version of bert-base-uncased; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (a usage sketch follows the results below):

  • Accuracy: 0.7623
  • Loss: 1.5465
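
The card does not document the task, the label set, or the hosting repo id, so the following is only a minimal usage sketch. It assumes the checkpoint is a standard sequence-classification fine-tune of bert-base-uncased; the repo id and the example input are placeholders, not values taken from this card.

```python
# Minimal usage sketch. Assumptions: the checkpoint carries a sequence-classification
# head on bert-base-uncased; "<user>/claim-judge" is a placeholder repo id and the
# label names come from whatever id2label mapping was saved with the model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "<user>/claim-judge"  # placeholder, replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Example claim to judge.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(predicted_id, predicted_id))
```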

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 160
  • eval_batch_size: 160
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 36
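
The training script itself is not included in this card, so the snippet below only reconstructs the listed values with transformers.TrainingArguments (Transformers 4.29 API). The output directory, evaluation strategy, and the treatment of the batch size as per-device are assumptions; dataset loading, the model, and the Trainer/compute_metrics wiring are omitted.

```python
# Sketch only: mirrors the hyperparameters listed above; names marked "assumed"
# are not documented in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="claim-judge",            # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=160,     # card reports train_batch_size: 160 (assumed per device)
    per_device_eval_batch_size=160,
    seed=42,
    num_train_epochs=36,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                      # Adam settings as reported in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",         # assumed: the results table reports one eval per epoch
)
```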

Training results

| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:---:|:---:|:---:|:---:|:---:|
| 0.4968 | 1.0 | 1041 | 0.7587 | 0.6138 |
| 0.4174 | 2.0 | 2082 | 0.7695 | 0.6211 |
| 0.4036 | 3.0 | 3123 | 0.7733 | 0.6181 |
| 0.3415 | 4.0 | 4164 | 0.7772 | 0.6450 |
| 0.2868 | 5.0 | 5205 | 0.7738 | 0.6896 |
| 0.2453 | 6.0 | 6246 | 0.7763 | 0.7119 |
| 0.2003 | 7.0 | 7287 | 0.7728 | 0.8254 |
| 0.1683 | 8.0 | 8328 | 0.7712 | 0.9288 |
| 0.1439 | 9.0 | 9369 | 0.7764 | 0.8993 |
| 0.1197 | 10.0 | 10410 | 0.7729 | 0.9819 |
| 0.102 | 11.0 | 11451 | 0.7709 | 1.0478 |
| 0.088 | 12.0 | 12492 | 0.7692 | 1.1574 |
| 0.087 | 13.0 | 13533 | 0.7709 | 1.0969 |
| 0.0779 | 14.0 | 14574 | 0.7661 | 1.2575 |
| 0.0695 | 15.0 | 15615 | 0.7658 | 1.3540 |
| 0.0664 | 16.0 | 16656 | 0.7719 | 1.2155 |
| 0.058 | 17.0 | 17697 | 0.7654 | 1.3065 |
| 0.0533 | 18.0 | 18738 | 0.7674 | 1.3535 |
| 0.0496 | 19.0 | 19779 | 0.7663 | 1.3327 |
| 0.0459 | 20.0 | 20820 | 0.7686 | 1.3893 |
| 0.0432 | 21.0 | 21861 | 0.7691 | 1.4211 |
| 0.0396 | 22.0 | 22902 | 0.7682 | 1.4810 |
| 0.0371 | 23.0 | 23943 | 0.7705 | 1.4926 |
| 0.0338 | 24.0 | 24984 | 0.7633 | 1.5058 |
| 0.037 | 25.0 | 26025 | 0.7604 | 1.4986 |
| 0.034 | 26.0 | 27066 | 0.7611 | 1.5314 |
| 0.0317 | 27.0 | 28107 | 0.7659 | 1.4636 |
| 0.0312 | 28.0 | 29148 | 0.7658 | 1.5006 |
| 0.0282 | 29.0 | 30189 | 0.7672 | 1.4250 |
| 0.0282 | 30.0 | 31230 | 0.7662 | 1.4904 |
| 0.0264 | 31.0 | 32271 | 0.7669 | 1.5415 |
| 0.0253 | 32.0 | 33312 | 0.7679 | 1.6110 |
| 0.0257 | 33.0 | 34353 | 0.7645 | 1.6097 |
| 0.0233 | 34.0 | 35394 | 0.7614 | 1.6646 |
| 0.0251 | 35.0 | 36435 | 0.7651 | 1.6080 |
| 0.0236 | 36.0 | 37476 | 0.7699 | 1.5824 |
| 0.025 | 37.0 | 38517 | 0.7623 | 1.5465 |

Framework versions

  • Transformers 4.29.2
  • Pytorch 2.0.1
  • Datasets 2.12.0
  • Tokenizers 0.13.3