---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
  - generated_from_trainer
datasets:
  - essays_su_g
metrics:
  - accuracy
model-index:
  - name: longformer-simple
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: essays_su_g
          type: essays_su_g
          config: simple
          split: test
          args: simple
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8280482998315956
---

# longformer-simple

This model is a fine-tuned version of allenai/longformer-base-4096 on the essays_su_g dataset. It achieves the following results on the evaluation set:

- Loss: 0.4474
- Claim: {'precision': 0.5788206979542719, 'recall': 0.5656161806208843, 'f1-score': 0.5721422624003807, 'support': 4252.0}
- Majorclaim: {'precision': 0.6985815602836879, 'recall': 0.812557286892759, 'f1-score': 0.751271186440678, 'support': 2182.0}
- O: {'precision': 0.93909038572251, 'recall': 0.8793530997304583, 'f1-score': 0.9082405345211582, 'support': 9275.0}
- Premise: {'precision': 0.8599473306200622, 'recall': 0.8832786885245901, 'f1-score': 0.8714568759856051, 'support': 12200.0}
- Accuracy: 0.8280
- Macro avg: {'precision': 0.7691099936451331, 'recall': 0.7852013139421729, 'f1-score': 0.7757777148369556, 'support': 27909.0}
- Weighted avg: {'precision': 0.8308026562535961, 'recall': 0.8280482998315956, 'f1-score': 0.828683488238493, 'support': 27909.0}
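
The checkpoint is a token classifier over the argument-component labels listed above (Claim, Majorclaim, Premise, O). A minimal inference sketch, assuming the model is available on the Hub as `Theoreticallyhugo/longformer-simple` (the repo id is an assumption; a local checkpoint path works the same way):

```python
# Minimal inference sketch; the Hub repo id below is an assumption, not confirmed by this card.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "Theoreticallyhugo/longformer-simple"  # hypothetical repo id (or a local checkpoint path)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
model.eval()

text = "School uniforms should be mandatory because they reduce peer pressure."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

pred_ids = logits.argmax(dim=-1).squeeze(0).tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))
for token, pred_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred_id])
```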

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
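
The metadata above names the `essays_su_g` dataset with the `simple` config and a `test` evaluation split. A minimal loading sketch (the exact Hub dataset path is an assumption):

```python
# Sketch of loading the dataset named in the metadata; the Hub path below is an assumption.
from datasets import load_dataset

dataset = load_dataset("Theoreticallyhugo/essays_su_g", "simple")  # hypothetical path, config "simple"
print(dataset)             # available splits and their sizes
print(dataset["test"][0])  # one evaluation example (the "test" split is named in the metadata)
```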

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
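
A minimal sketch of how these settings map onto `TrainingArguments` (unlisted arguments such as the output directory and the evaluation schedule are assumptions; the actual training script is not part of this card):

```python
# Sketch of the listed hyperparameters as Trainer settings; values not listed above are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="longformer-simple",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's default AdamW settings.
    evaluation_strategy="epoch",      # assumption: one evaluation per epoch, matching the results table
    logging_strategy="epoch",         # assumption
)
```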

### Training results

| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:----------:|:-:|:-------:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.5887 | {'precision': 0.4995083579154376, 'recall': 0.2389463781749765, 'f1-score': 0.32325803372573975, 'support': 4252.0} | {'precision': 0.5970350404312669, 'recall': 0.4060494958753437, 'f1-score': 0.4833606110201855, 'support': 2182.0} | {'precision': 0.8159389073820247, 'recall': 0.898544474393531, 'f1-score': 0.8552516804351173, 'support': 9275.0} | {'precision': 0.7941031247795726, 'recall': 0.9227868852459017, 'f1-score': 0.8536224741251849, 'support': 12200.0} | 0.7701 | {'precision': 0.6766463576270755, 'recall': 0.6165818084224383, 'f1-score': 0.6288731998265569, 'support': 27909.0} | {'precision': 0.7410703172581077, 'recall': 0.7701458310939123, 'f1-score': 0.7444136132792598, 'support': 27909.0} |
| No log | 2.0 | 82 | 0.4737 | {'precision': 0.5664355062413314, 'recall': 0.48024459078080906, 'f1-score': 0.5197912689321624, 'support': 4252.0} | {'precision': 0.707936507936508, 'recall': 0.7153987167736022, 'f1-score': 0.7116480510599499, 'support': 2182.0} | {'precision': 0.9119831504267819, 'recall': 0.8870080862533692, 'f1-score': 0.8993222562308703, 'support': 9275.0} | {'precision': 0.8385838813274201, 'recall': 0.8989344262295081, 'f1-score': 0.8677110530896431, 'support': 12200.0} | 0.8168 | {'precision': 0.7562347614830104, 'recall': 0.7453964550093222, 'f1-score': 0.7496181573281565, 'support': 27909.0} | {'precision': 0.8112998783639159, 'recall': 0.81683327958723, 'f1-score': 0.8130086100235527, 'support': 27909.0} |
| No log | 3.0 | 123 | 0.4448 | {'precision': 0.6023609816713265, 'recall': 0.4560206961429915, 'f1-score': 0.5190737518404497, 'support': 4252.0} | {'precision': 0.7517178195144297, 'recall': 0.7520623281393217, 'f1-score': 0.7518900343642613, 'support': 2182.0} | {'precision': 0.9046644403748788, 'recall': 0.9054447439353099, 'f1-score': 0.9050544239681, 'support': 9275.0} | {'precision': 0.8368874773139746, 'recall': 0.9071311475409836, 'f1-score': 0.8705947136563877, 'support': 12200.0} | 0.8257 | {'precision': 0.7739076797186524, 'recall': 0.7551647289396517, 'f1-score': 0.7616532309572996, 'support': 27909.0} | {'precision': 0.8170223613871673, 'recall': 0.8257193020172704, 'f1-score': 0.8192110407653613, 'support': 27909.0} |
| No log | 4.0 | 164 | 0.4474 | {'precision': 0.5788206979542719, 'recall': 0.5656161806208843, 'f1-score': 0.5721422624003807, 'support': 4252.0} | {'precision': 0.6985815602836879, 'recall': 0.812557286892759, 'f1-score': 0.751271186440678, 'support': 2182.0} | {'precision': 0.93909038572251, 'recall': 0.8793530997304583, 'f1-score': 0.9082405345211582, 'support': 9275.0} | {'precision': 0.8599473306200622, 'recall': 0.8832786885245901, 'f1-score': 0.8714568759856051, 'support': 12200.0} | 0.8280 | {'precision': 0.7691099936451331, 'recall': 0.7852013139421729, 'f1-score': 0.7757777148369556, 'support': 27909.0} | {'precision': 0.8308026562535961, 'recall': 0.8280482998315956, 'f1-score': 0.828683488238493, 'support': 27909.0} |
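
The per-class entries above follow the dictionary format of scikit-learn's `classification_report` with `output_dict=True`; whether that exact helper produced them is an assumption. A minimal sketch of computing token-level metrics in this shape:

```python
# Sketch of per-class token metrics in the format shown above (use of scikit-learn is an assumption).
from sklearn.metrics import classification_report

# Flattened gold labels and predictions for every (non-ignored) token in the eval split.
labels = ["O", "Claim", "Premise", "Premise", "Majorclaim", "O"]
preds  = ["O", "Claim", "Premise", "Claim",   "Majorclaim", "O"]

report = classification_report(labels, preds, output_dict=True, zero_division=0)
print(report["Claim"])     # {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}
print(report["accuracy"])  # overall token accuracy
```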

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2