# ModernBERT-large_nli
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.6038
- Accuracy: 0.5787
- Precision Macro: 0.5794
- Recall Macro: 0.5790
- F1 Macro: 0.5792
- F1 Weighted: 0.5788
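
The card does not document the label set, so the following is a minimal inference sketch assuming a standard sequence-classification NLI head; the actual id-to-label mapping comes from the checkpoint's config and may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "aiface/ModernBERT-large_nli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# NLI models are typically fed a (premise, hypothesis) sentence pair.
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Label names are read from the checkpoint's config; whether they are
# entailment / neutral / contradiction is an assumption, not documented here.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```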
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
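
For reproducibility, here is a sketch of how these settings map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, and whether "Native AMP" ran as `fp16` or `bf16` is not stated on the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ModernBERT-large_nli",   # placeholder, not from the card
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,       # effective train batch size: 128
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",                 # betas=(0.9, 0.999), eps=1e-08 are the defaults
    seed=42,
    fp16=True,                           # assumed fp16 for "Native AMP"; bf16 is also possible
)
```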
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:|
| 2.1283 | 1.0 | 143 | 1.0136 | 0.4807 | 0.4674 | 0.4835 | 0.4509 | 0.4492 |
| 1.8848 | 2.0 | 286 | 0.9818 | 0.5202 | 0.5745 | 0.5219 | 0.5042 | 0.5038 |
| 1.7416 | 3.0 | 429 | 1.1233 | 0.3220 | 0.2102 | 0.3259 | 0.2190 | 0.2174 |
| 2.2168 | 4.0 | 572 | 1.1135 | 0.3277 | 0.1092 | 0.3333 | 0.1646 | 0.1618 |
| 2.2099 | 5.0 | 715 | 1.1089 | 0.3277 | 0.1092 | 0.3333 | 0.1646 | 0.1618 |
| 2.2191 | 6.0 | 858 | 1.1231 | 0.3282 | 0.4426 | 0.3338 | 0.1655 | 0.1627 |
| 2.2027 | 7.0 | 1001 | 1.0931 | 0.3774 | 0.2508 | 0.3801 | 0.3016 | 0.2993 |
| 2.1846 | 8.0 | 1144 | 1.0723 | 0.4013 | 0.3861 | 0.3995 | 0.3692 | 0.3705 |
| 2.1232 | 9.0 | 1287 | 1.0461 | 0.4244 | 0.4225 | 0.4248 | 0.4203 | 0.4202 |
| 2.0586 | 10.0 | 1430 | 1.0345 | 0.4510 | 0.4495 | 0.4494 | 0.4210 | 0.4220 |
| 2.0578 | 11.0 | 1573 | 1.0390 | 0.4523 | 0.4797 | 0.4511 | 0.4522 | 0.4525 |
| 2.0289 | 12.0 | 1716 | 1.0626 | 0.4665 | 0.5296 | 0.4668 | 0.4391 | 0.4389 |
| 1.5688 | 13.0 | 1859 | 0.8686 | 0.6084 | 0.6082 | 0.6089 | 0.6064 | 0.6061 |
| 1.2262 | 14.0 | 2002 | 0.9452 | 0.5973 | 0.5972 | 0.5978 | 0.5961 | 0.5958 |
| 0.6694 | 15.0 | 2145 | 1.2849 | 0.5809 | 0.5809 | 0.5817 | 0.5802 | 0.5798 |
| 0.2152 | 16.0 | 2288 | 1.9241 | 0.5752 | 0.5760 | 0.5753 | 0.5755 | 0.5753 |
| 0.043 | 17.0 | 2431 | 2.3196 | 0.5672 | 0.5685 | 0.5673 | 0.5675 | 0.5672 |
| 0.0074 | 18.0 | 2574 | 2.5393 | 0.5734 | 0.5747 | 0.5736 | 0.5740 | 0.5737 |
| 0.0015 | 19.0 | 2717 | 2.5970 | 0.5769 | 0.5780 | 0.5772 | 0.5776 | 0.5772 |
| 0.002 | 20.0 | 2860 | 2.6038 | 0.5787 | 0.5794 | 0.5790 | 0.5792 | 0.5788 |
### Framework versions
- Transformers 4.55.0
- PyTorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
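
To match this environment, the listed versions can be pinned when installing; note that the `+cu126` build of torch comes from the PyTorch CUDA wheel index, so a plain PyPI pin is an approximation.

```
pip install transformers==4.55.0 torch==2.7.0 datasets==4.0.0 tokenizers==0.21.4
```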