---
library_name: transformers
language:
- en
license: apache-2.0
base_model: google/bert_uncased_L-4_H-512_A-8
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert_uncased_L-4_H-512_A-8_qnli
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE QNLI
      type: glue
      args: qnli
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8735127219476478
---
# bert_uncased_L-4_H-512_A-8_qnli

This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the GLUE QNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.3148
- Accuracy: 0.8735
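
A minimal inference sketch is shown below. Assumptions: `model_id` is a placeholder for wherever this checkpoint is hosted (or a local path), and the printed label names depend on the checkpoint's config; QNLI asks whether the sentence contains the answer to the question.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bert_uncased_L-4_H-512_A-8_qnli"  # placeholder; replace with the actual repo id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

question = "What causes precipitation to fall?"
sentence = ("In meteorology, precipitation is any product of the condensation "
            "of atmospheric water vapor that falls under gravity.")

# QNLI is a sentence-pair task: does the sentence answer the question?
inputs = tokenizer(question, sentence, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # e.g. "entailment" / "not_entailment" if set in the config
```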
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch with the `Trainer` API follows this list):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
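
The training script itself is not part of this card; the sketch below shows how the listed hyperparameters could map onto `TrainingArguments`/`Trainer`. The `output_dir`, evaluation/save strategies, and metric wiring are assumptions, and since the results table below stops at epoch 8, the original run may not have completed all 50 epochs.

```python
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          DataCollatorWithPadding, TrainingArguments, Trainer)

raw = load_dataset("glue", "qnli")
tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-4_H-512_A-8")

def tokenize(batch):
    # QNLI pairs a question with a candidate answer sentence
    return tokenizer(batch["question"], batch["sentence"], truncation=True)

tokenized = raw.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "google/bert_uncased_L-4_H-512_A-8", num_labels=2)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = TrainingArguments(
    output_dir="bert_uncased_L-4_H-512_A-8_qnli",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    eval_strategy="epoch",   # assumed; named evaluation_strategy in older releases
    save_strategy="epoch",   # assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()
```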
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.437         | 1.0   | 410  | 0.3448          | 0.8521   |
| 0.3392        | 2.0   | 820  | 0.3215          | 0.8607   |
| 0.2746        | 3.0   | 1230 | 0.3148          | 0.8735   |
| 0.2175        | 4.0   | 1640 | 0.3549          | 0.8702   |
| 0.1712        | 5.0   | 2050 | 0.4000          | 0.8580   |
| 0.1311        | 6.0   | 2460 | 0.4335          | 0.8649   |
| 0.1065        | 7.0   | 2870 | 0.4819          | 0.8642   |
| 0.0849        | 8.0   | 3280 | 0.5127          | 0.8667   |
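
To sanity-check the reported numbers, the checkpoint from the best epoch (epoch 3) can be re-scored on the QNLI validation split with something like the sketch below; `model_id` is again a placeholder.

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bert_uncased_L-4_H-512_A-8_qnli"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

validation = load_dataset("glue", "qnli", split="validation")
metric = evaluate.load("glue", "qnli")  # reports accuracy for QNLI

for batch in validation.iter(batch_size=256):
    inputs = tokenizer(batch["question"], batch["sentence"],
                       padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        preds = model(**inputs).logits.argmax(dim=-1)
    metric.add_batch(predictions=preds.tolist(), references=batch["label"])

print(metric.compute())  # should land near the reported 0.8735 accuracy
```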
### Framework versions

- Transformers 4.46.3
- PyTorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3