---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
  - generated_from_trainer
datasets:
  - essays_su_g
metrics:
  - accuracy
model-index:
  - name: longformer-simple
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: essays_su_g
          type: essays_su_g
          config: simple
          split: train[80%:100%]
          args: simple
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8472790470328397
---

# longformer-simple

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset. It achieves the following results on the evaluation set:

- Loss: 0.4453
- Accuracy: 0.8473

| Class        | Precision | Recall | F1-score | Support |
|:-------------|----------:|-------:|---------:|--------:|
| Claim        | 0.6185 | 0.6192 | 0.6189 | 4168 |
| Majorclaim   | 0.7669 | 0.8151 | 0.7903 | 2152 |
| O            | 0.9382 | 0.8975 | 0.9174 | 9226 |
| Premise      | 0.8745 | 0.8934 | 0.8838 | 12073 |
| Macro avg    | 0.7995 | 0.8063 | 0.8026 | 27619 |
| Weighted avg | 0.8488 | 0.8473 | 0.8478 | 27619 |
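The weighted averages follow directly from the per-class scores and their supports. As a quick sanity check, the weighted precision can be recomputed from the per-class values reported above:

```python
# Per-class precision and support, copied from the evaluation results above.
metrics = {
    "Claim":      (0.6184998801821232, 4168),
    "Majorclaim": (0.7669435942282467, 2152),
    "O":          (0.9382436260623229, 9226),
    "Premise":    (0.8744932706340198, 12073),
}

total = sum(support for _, support in metrics.values())  # 27619 tokens
weighted_precision = sum(p * s for p, s in metrics.values()) / total

print(round(weighted_precision, 4))  # 0.8488 -- matches the weighted avg row
```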

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
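The metadata above does record that evaluation used the `simple` config on the `train[80%:100%]` slice of essays_su_g. A minimal loading sketch under two assumptions: the hub id `Theoreticallyhugo/essays_su_g` and training on the complementary `train[:80%]` slice (neither is stated in this card):

```python
from datasets import load_dataset

# NOTE: the hub id is an assumption; adjust it to the actual dataset repo.
# The eval slice train[80%:100%] comes from the card metadata; the train
# slice train[:80%] is assumed to be its complement.
train_ds = load_dataset("Theoreticallyhugo/essays_su_g", "simple", split="train[:80%]")
eval_ds = load_dataset("Theoreticallyhugo/essays_su_g", "simple", split="train[80%:100%]")
```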

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
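These settings map onto `transformers.TrainingArguments` roughly as follows — a sketch, not the exact training script; `output_dir` is a placeholder and any option not listed above is left at its library default:

```python
from transformers import TrainingArguments

# Reconstruction of the listed hyperparameters; the Adam betas/epsilon and
# linear scheduler match the card, output_dir is a placeholder.
args = TrainingArguments(
    output_dir="longformer-simple",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=7,
)
```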

### Training results

Per-class cells give precision / recall / F1 (rounded to four decimals); class supports are constant across epochs (Claim 4168, Majorclaim 2152, O 9226, Premise 12073; total 27619).

| Training Loss | Epoch | Step | Validation Loss | Claim (P/R/F1) | Majorclaim (P/R/F1) | O (P/R/F1) | Premise (P/R/F1) | Accuracy | Macro avg (P/R/F1) | Weighted avg (P/R/F1) |
|:---|---:|---:|---:|:---|:---|:---|:---|---:|:---|:---|
| No log | 1.0 | 41 | 0.5604 | 0.4942 / 0.2474 / 0.3297 | 0.5519 / 0.6478 / 0.5960 | 0.9073 / 0.8415 / 0.8732 | 0.7841 / 0.9385 / 0.8544 | 0.7791 | 0.6844 / 0.6688 / 0.6633 | 0.7634 / 0.7791 / 0.7613 |
| No log | 2.0 | 82 | 0.4518 | 0.5778 / 0.4578 / 0.5108 | 0.6824 / 0.7909 / 0.7327 | 0.9419 / 0.8648 / 0.9017 | 0.8292 / 0.9171 / 0.8710 | 0.8205 | 0.7579 / 0.7576 / 0.7541 | 0.8175 / 0.8205 / 0.8161 |
| No log | 3.0 | 123 | 0.4276 | 0.5879 / 0.5518 / 0.5693 | 0.6930 / 0.8401 / 0.7595 | 0.9441 / 0.8749 / 0.9082 | 0.8597 / 0.8935 / 0.8762 | 0.8316 | 0.7712 / 0.7901 / 0.7783 | 0.8339 / 0.8316 / 0.8315 |
| No log | 4.0 | 164 | 0.4280 | 0.6109 / 0.5393 / 0.5729 | 0.8035 / 0.7142 / 0.7562 | 0.9038 / 0.9213 / 0.9125 | 0.8596 / 0.8986 / 0.8787 | 0.8376 | 0.7944 / 0.7684 / 0.7801 | 0.8324 / 0.8376 / 0.8343 |
| No log | 5.0 | 205 | 0.4388 | 0.6131 / 0.5871 / 0.5998 | 0.7461 / 0.8137 / 0.7784 | 0.9357 / 0.8878 / 0.9111 | 0.8647 / 0.8972 / 0.8807 | 0.8408 | 0.7899 / 0.7964 / 0.7925 | 0.8412 / 0.8408 / 0.8405 |
| No log | 6.0 | 246 | 0.4455 | 0.6160 / 0.6072 / 0.6116 | 0.7738 / 0.8011 / 0.7872 | 0.9405 / 0.8930 / 0.9162 | 0.8682 / 0.9005 / 0.8841 | 0.8460 | 0.7996 / 0.8005 / 0.7998 | 0.8470 / 0.8460 / 0.8461 |
| No log | 7.0 | 287 | 0.4453 | 0.6185 / 0.6192 / 0.6189 | 0.7669 / 0.8151 / 0.7903 | 0.9382 / 0.8975 / 0.9174 | 0.8745 / 0.8934 / 0.8838 | 0.8473 | 0.7995 / 0.8063 / 0.8026 | 0.8488 / 0.8473 / 0.8478 |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2