---
base_model: cardiffnlp/twitter-xlm-roberta-base-sentiment
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - precision
  - recall
model-index:
  - name: xlm-roberta-meta4types-ft
    results: []
---

# xlm-roberta-meta4types-ft

This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) on an unknown dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the list):

- Loss: 0.8324
- ROC AUC: 0.7122
- Hamming Loss: 0.2261
- F1 Score: 0.6089
- Accuracy: 0.5528
- Precision: 0.6081
- Recall: 0.6436
- Per Label: {'f1_score': 0.608905822183525, 'precision': 0.6080571799870046, 'recall': 0.6435841440010588, 'support': 235}
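
Below is a minimal inference sketch. The hub id is assumed from this repository's name (`alecmontero/xlm-roberta-meta4types-ft-ES`), and treating the head as multi-label (per-label sigmoid with a 0.5 threshold) is an inference from the Hamming loss and per-label metrics above, not something this card states.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "alecmontero/xlm-roberta-meta4types-ft-ES"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Texto de ejemplo para clasificar."  # the ES suffix suggests Spanish input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits[0]

# Assumed multi-label decoding: independent sigmoid per label, 0.5 cutoff.
probs = torch.sigmoid(logits)
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)
```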

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows the list):

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
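
The configuration below is a hedged reconstruction of these settings in `transformers` 4.43 terms; the `output_dir` is a placeholder, and the Adam betas/epsilon shown are the library defaults, which match the values reported above.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments object matching the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="xlm-roberta-meta4types-ft",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    # Adam settings below are the transformers defaults and match the card.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```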

### Training results

| Training Loss | Epoch | Step | Validation Loss | ROC AUC | Hamming Loss | F1 Score | Accuracy | Precision | Recall | Per Label |
|---------------|-------|------|-----------------|---------|--------------|----------|----------|-----------|--------|-----------|
| 0.4279 | 1.0 | 199 | 0.5287 | 0.4967 | 0.2496 | 0.3209 | 0.5276 | 0.6759 | 0.3575 | {'f1_score': 0.3208852937872149, 'precision': 0.6759286629224553, 'recall': 0.35748792270531404, 'support': 235} |
| 0.4609 | 2.0 | 398 | 0.5076 | 0.5276 | 0.2245 | 0.3757 | 0.5779 | 0.8026 | 0.3913 | {'f1_score': 0.3757246741060956, 'precision': 0.8025944726452341, 'recall': 0.3913043478260869, 'support': 235} |
| 0.5875 | 3.0 | 597 | 0.5463 | 0.5557 | 0.2127 | 0.4232 | 0.6080 | 0.6653 | 0.4153 | {'f1_score': 0.42320834457332973, 'precision': 0.6653348029760265, 'recall': 0.41534974521871487, 'support': 235} |
| 0.493 | 4.0 | 796 | 0.5526 | 0.6428 | 0.2077 | 0.5744 | 0.6080 | 0.6577 | 0.5455 | {'f1_score': 0.5744086944086945, 'precision': 0.6577216876443267, 'recall': 0.5455495996294091, 'support': 235} |
| 0.3519 | 5.0 | 995 | 0.6760 | 0.6795 | 0.2161 | 0.5809 | 0.5879 | 0.6192 | 0.5961 | {'f1_score': 0.5809003977320809, 'precision': 0.6191632544737641, 'recall': 0.5960790152868771, 'support': 235} |
| 0.2451 | 6.0 | 1194 | 0.7729 | 0.7046 | 0.2312 | 0.6045 | 0.5578 | 0.6161 | 0.6045 | {'f1_score': 0.6045152483631816, 'precision': 0.6161038489469862, 'recall': 0.6044603269141685, 'support': 235} |
| 0.0608 | 7.0 | 1393 | 0.7616 | 0.6942 | 0.2127 | 0.6060 | 0.5779 | 0.6221 | 0.6095 | {'f1_score': 0.6060266030810951, 'precision': 0.6220689655172414, 'recall': 0.6094566871815233, 'support': 235} |
| 0.0859 | 8.0 | 1592 | 0.8324 | 0.7122 | 0.2261 | 0.6089 | 0.5528 | 0.6081 | 0.6436 | {'f1_score': 0.608905822183525, 'precision': 0.6080571799870046, 'recall': 0.6435841440010588, 'support': 235} |
| 0.0767 | 9.0 | 1791 | 0.8192 | 0.6950 | 0.2127 | 0.6004 | 0.5578 | 0.6086 | 0.6073 | {'f1_score': 0.6003549503292779, 'precision': 0.6086247086247086, 'recall': 0.6072827741380452, 'support': 235} |
| 0.0221 | 10.0 | 1990 | 0.8094 | 0.6975 | 0.2077 | 0.6135 | 0.5578 | 0.6116 | 0.6215 | {'f1_score': 0.6135398054397458, 'precision': 0.6116043923140263, 'recall': 0.6215108199324995, 'support': 235} |
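
For reference, here is a hedged sketch of how a `compute_metrics` function could produce the columns above for a multi-label head; the sigmoid decoding, 0.5 threshold, and macro averaging are assumptions, not taken from the original training script.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    hamming_loss,
    precision_score,
    recall_score,
    roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1 / (1 + np.exp(-logits))   # sigmoid over per-label logits
    preds = (probs > 0.5).astype(int)   # assumed 0.5 decision threshold
    return {
        "roc_auc": roc_auc_score(labels, probs, average="macro"),
        "hamming_loss": hamming_loss(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),
        # accuracy_score on indicator matrices is exact-match (subset) accuracy
        "accuracy": accuracy_score(labels, preds),
        "precision": precision_score(labels, preds, average="macro"),
        "recall": recall_score(labels, preds, average="macro"),
    }
```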

### Framework versions

- Transformers 4.43.1
- Pytorch 1.13.1+cu116
- Datasets 2.20.0
- Tokenizers 0.19.1