
arabert_baseline_augmented_organization_task3_fold0

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset (the automated training log did not record a dataset name). It achieves the following results on the evaluation set:

  • Loss: 1.2214
  • Qwk (Quadratic Weighted Kappa): 0.0435
  • Mse (Mean Squared Error): 1.2214
  • Rmse (Root Mean Squared Error): 1.1052
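Since the evaluation loss is the MSE, Loss and Mse report the same value, and Rmse is simply its square root. This relationship can be checked directly:

```python
import math

mse = 1.2214           # final validation MSE reported above
rmse = math.sqrt(mse)  # ~= 1.1052, matching the reported Rmse
print(round(rmse, 4))
```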

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
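The actual training script is not published; a hypothetical reconstruction of the setup from the hyperparameters above, using the Hugging Face `TrainingArguments` API, might look like this (the `output_dir` name and the wiring are assumptions):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed in this card,
# not the author's actual training script.
training_args = TrainingArguments(
    output_dir="arabert_baseline_augmented_organization_task3_fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam betas and epsilon match the values listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

These arguments would then be passed to a `Trainer` together with the base model and the (unrecorded) dataset.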

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk     | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log        | 0.25  | 2    | 3.4302          | 0.0     | 3.4302 | 1.8521 |
| No log        | 0.5   | 4    | 1.6583          | 0.0128  | 1.6583 | 1.2878 |
| No log        | 0.75  | 6    | 0.8510          | 0.0435  | 0.8510 | 0.9225 |
| No log        | 1.0   | 8    | 0.9134          | -0.0185 | 0.9134 | 0.9557 |
| No log        | 1.25  | 10   | 0.7543          | 0.2029  | 0.7543 | 0.8685 |
| No log        | 1.5   | 12   | 0.8264          | 0.0     | 0.8264 | 0.9091 |
| No log        | 1.75  | 14   | 0.8400          | 0.0530  | 0.8400 | 0.9165 |
| No log        | 2.0   | 16   | 0.8947          | -0.0421 | 0.8947 | 0.9459 |
| No log        | 2.25  | 18   | 0.8986          | -0.0732 | 0.8986 | 0.9479 |
| No log        | 2.5   | 20   | 0.8985          | -0.1159 | 0.8985 | 0.9479 |
| No log        | 2.75  | 22   | 0.9015          | -0.1440 | 0.9015 | 0.9495 |
| No log        | 3.0   | 24   | 0.9418          | -0.0732 | 0.9418 | 0.9705 |
| No log        | 3.25  | 26   | 1.0123          | 0.1872  | 1.0123 | 1.0061 |
| No log        | 3.5   | 28   | 1.1713          | -0.0833 | 1.1713 | 1.0823 |
| No log        | 3.75  | 30   | 1.1238          | -0.0377 | 1.1238 | 1.0601 |
| No log        | 4.0   | 32   | 1.1146          | -0.1440 | 1.1146 | 1.0557 |
| No log        | 4.25  | 34   | 1.1077          | -0.1440 | 1.1077 | 1.0525 |
| No log        | 4.5   | 36   | 1.0752          | -0.1786 | 1.0752 | 1.0369 |
| No log        | 4.75  | 38   | 1.0399          | -0.1786 | 1.0399 | 1.0198 |
| No log        | 5.0   | 40   | 1.1095          | -0.1440 | 1.1095 | 1.0533 |
| No log        | 5.25  | 42   | 1.2775          | -0.0377 | 1.2775 | 1.1303 |
| No log        | 5.5   | 44   | 1.3060          | -0.2222 | 1.3060 | 1.1428 |
| No log        | 5.75  | 46   | 1.1823          | -0.1608 | 1.1823 | 1.0873 |
| No log        | 6.0   | 48   | 1.1076          | -0.1440 | 1.1076 | 1.0524 |
| No log        | 6.25  | 50   | 1.1222          | -0.1440 | 1.1222 | 1.0593 |
| No log        | 6.5   | 52   | 1.1825          | -0.0645 | 1.1825 | 1.0874 |
| No log        | 6.75  | 54   | 1.2006          | -0.0645 | 1.2006 | 1.0957 |
| No log        | 7.0   | 56   | 1.2286          | -0.0645 | 1.2286 | 1.1084 |
| No log        | 7.25  | 58   | 1.2391          | -0.0645 | 1.2391 | 1.1132 |
| No log        | 7.5   | 60   | 1.2547          | -0.0645 | 1.2547 | 1.1201 |
| No log        | 7.75  | 62   | 1.2981          | -0.1786 | 1.2981 | 1.1393 |
| No log        | 8.0   | 64   | 1.3179          | -0.1786 | 1.3179 | 1.1480 |
| No log        | 8.25  | 66   | 1.3240          | -0.1786 | 1.3240 | 1.1506 |
| No log        | 8.5   | 68   | 1.2964          | -0.1786 | 1.2964 | 1.1386 |
| No log        | 8.75  | 70   | 1.2467          | -0.1786 | 1.2467 | 1.1166 |
| No log        | 9.0   | 72   | 1.2120          | 0.0435  | 1.2120 | 1.1009 |
| No log        | 9.25  | 74   | 1.2089          | -0.0809 | 1.2089 | 1.0995 |
| No log        | 9.5   | 76   | 1.2133          | -0.0809 | 1.2133 | 1.1015 |
| No log        | 9.75  | 78   | 1.2185          | 0.0435  | 1.2185 | 1.1038 |
| No log        | 10.0  | 80   | 1.2214          | 0.0435  | 1.2214 | 1.1052 |
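The Qwk column is the Quadratic Weighted Kappa, which measures agreement between predicted and true ordinal ratings while penalizing larger disagreements quadratically. The evaluation code is not published; a minimal pure-Python sketch of the metric (the function name and signature are my own) is:

```python
from collections import Counter

def quadratic_weighted_kappa(y_true, y_pred):
    """Quadratic Weighted Kappa for integer ratings.

    Assumes the ratings span at least two distinct categories.
    """
    lo = min(min(y_true), min(y_pred))
    hi = max(max(y_true), max(y_pred))
    n = hi - lo + 1  # number of rating categories

    # Observed confusion matrix
    observed = [[0.0] * n for _ in range(n)]
    for t, p in zip(y_true, y_pred):
        observed[t - lo][p - lo] += 1

    # Marginal histograms for the expected (chance) matrix
    hist_true = Counter(t - lo for t in y_true)
    hist_pred = Counter(p - lo for p in y_pred)
    num_items = len(y_true)

    numerator = 0.0
    denominator = 0.0
    for i in range(n):
        for j in range(n):
            w = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic penalty
            expected = hist_true[i] * hist_pred[j] / num_items
            numerator += w * observed[i][j]
            denominator += w * expected
    return 1.0 - numerator / denominator
```

Perfect agreement yields 1.0, chance-level agreement yields 0.0, and systematic disagreement goes negative, which is why several epochs in the table show Qwk values below zero.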

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
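To reproduce this environment, the listed versions can be pinned with pip (assuming the standard PyPI package names; the `+cu118` PyTorch build comes from the CUDA 11.8 wheel index):

```shell
pip install transformers==4.44.2 datasets==2.21.0 tokenizers==0.19.1
# PyTorch 2.4.0+cu118 is published on the CUDA 11.8 index:
pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118
```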
Model size: 135M params (F32, stored as Safetensors)
