# cwe-parent-vulnerability-classification-bert-base-uncased

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9385
- Accuracy: 0.4270
- F1 Macro: 0.2048

These values correspond to the epoch-27 checkpoint, which has the lowest validation loss in the training table below; later epochs reach higher accuracy (up to 0.6067) at a slightly higher loss.
## Model description
More information needed
## Intended uses & limitations
More information needed
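Pending full documentation, the checkpoint can be loaded like any Hub text-classification model. The sketch below assumes the repository id matches the model name and that inputs are free-text vulnerability descriptions; both are assumptions, not documented facts.

```python
from transformers import pipeline

# Assumption: the checkpoint is published under this id (or substitute a local path).
classifier = pipeline(
    "text-classification",
    model="cwe-parent-vulnerability-classification-bert-base-uncased",
)

# Illustrative input only; the expected input format is not documented in this card.
text = "Buffer overflow in the request parser allows remote code execution."
print(classifier(text))  # e.g. [{'label': 'LABEL_0', 'score': 0.42}]
```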
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
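As a rough guide, the hyperparameters above map onto a `TrainingArguments` configuration like the following. This is a reconstruction from the list, not the original training script; `output_dir` and the per-epoch evaluation strategy are assumptions.

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; not the original script.
training_args = TrainingArguments(
    output_dir="cwe-parent-vulnerability-classification-bert-base-uncased",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",    # AdamW; betas=(0.9, 0.999) and eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=40,
    eval_strategy="epoch",  # assumption: the results table shows one evaluation per epoch
)
```

These arguments would then be passed to a `Trainer` together with the (undocumented) tokenized train and evaluation datasets.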
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 3.2751        | 1.0   | 25   | 3.3057          | 0.0112   | 0.0019   |
| 3.1518        | 2.0   | 50   | 3.1378          | 0.0449   | 0.0337   |
| 3.0997        | 3.0   | 75   | 3.2138          | 0.0674   | 0.0376   |
| 2.9858        | 4.0   | 100  | 3.1921          | 0.1348   | 0.0806   |
| 2.9764        | 5.0   | 125  | 3.1470          | 0.2584   | 0.1308   |
| 2.8564        | 6.0   | 150  | 3.2075          | 0.3708   | 0.1465   |
| 2.8474        | 7.0   | 175  | 3.1799          | 0.3596   | 0.1653   |
| 2.7354        | 8.0   | 200  | 3.1618          | 0.3483   | 0.1412   |
| 2.6452        | 9.0   | 225  | 3.1248          | 0.3258   | 0.1531   |
| 2.5802        | 10.0  | 250  | 3.1154          | 0.3371   | 0.1488   |
| 2.5078        | 11.0  | 275  | 3.1547          | 0.3820   | 0.1712   |
| 2.4203        | 12.0  | 300  | 3.1410          | 0.3483   | 0.1549   |
| 2.3624        | 13.0  | 325  | 3.1409          | 0.4045   | 0.1776   |
| 2.3642        | 14.0  | 350  | 3.0964          | 0.2809   | 0.1496   |
| 2.2259        | 15.0  | 375  | 3.0960          | 0.3708   | 0.1904   |
| 2.1874        | 16.0  | 400  | 3.0170          | 0.3146   | 0.1653   |
| 2.15          | 17.0  | 425  | 3.0944          | 0.3146   | 0.1452   |
| 2.1051        | 18.0  | 450  | 3.0225          | 0.3258   | 0.1807   |
| 1.988         | 19.0  | 475  | 3.0687          | 0.3820   | 0.1539   |
| 1.9716        | 20.0  | 500  | 3.0054          | 0.3820   | 0.1675   |
| 1.9034        | 21.0  | 525  | 2.9834          | 0.3820   | 0.1985   |
| 1.8538        | 22.0  | 550  | 3.0251          | 0.3933   | 0.1942   |
| 1.8294        | 23.0  | 575  | 3.0231          | 0.3708   | 0.1579   |
| 1.7436        | 24.0  | 600  | 2.9719          | 0.4045   | 0.1976   |
| 1.7088        | 25.0  | 625  | 2.9701          | 0.4157   | 0.2138   |
| 1.7028        | 26.0  | 650  | 2.9724          | 0.4607   | 0.2250   |
| 1.6962        | 27.0  | 675  | 2.9385          | 0.4270   | 0.2048   |
| 1.5973        | 28.0  | 700  | 2.9636          | 0.4494   | 0.1904   |
| 1.5754        | 29.0  | 725  | 2.9441          | 0.5393   | 0.2116   |
| 1.5279        | 30.0  | 750  | 2.9785          | 0.5506   | 0.2373   |
| 1.5802        | 31.0  | 775  | 2.9711          | 0.5618   | 0.2346   |
| 1.4479        | 32.0  | 800  | 2.9884          | 0.5730   | 0.2335   |
| 1.484         | 33.0  | 825  | 3.0117          | 0.5730   | 0.2550   |
| 1.4243        | 34.0  | 850  | 2.9759          | 0.5843   | 0.2408   |
| 1.4473        | 35.0  | 875  | 2.9626          | 0.5955   | 0.2692   |
| 1.3875        | 36.0  | 900  | 2.9673          | 0.5843   | 0.2342   |
| 1.4214        | 37.0  | 925  | 2.9887          | 0.5843   | 0.2564   |
| 1.373         | 38.0  | 950  | 2.9894          | 0.6067   | 0.2728   |
| 1.3472        | 39.0  | 975  | 2.9805          | 0.5730   | 0.2311   |
| 1.336         | 40.0  | 1000 | 2.9836          | 0.5843   | 0.2439   |
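The reported accuracy and macro F1 are consistent with a standard `compute_metrics` hook passed to the `Trainer`. A minimal sketch using scikit-learn follows; this is an assumption about how the metrics were computed, as the original metric code is not included in this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Return accuracy and macro-averaged F1 from (logits, labels)."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # predicted class per example
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1_macro": f1_score(labels, predictions, average="macro"),
    }
```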
### Framework versions
- Transformers 4.55.4
- Pytorch 2.7.1+cu126
- Datasets 4.0.0
- Tokenizers 0.21.2