---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
  - generated_from_trainer
datasets:
  - essays_su_g
metrics:
  - accuracy
model-index:
  - name: longformer-spans
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: essays_su_g
          type: essays_su_g
          config: spans
          split: test
          args: spans
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9393385646207316
---

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset. It achieves the following results on the evaluation set:

- Loss: 0.1675
- B: precision 0.8322, recall 0.8990, F1 0.8643 (support: 1059)
- I: precision 0.9500, recall 0.9636, F1 0.9567 (support: 17575)
- O: precision 0.9319, recall 0.8980, F1 0.9146 (support: 9275)
- Accuracy: 0.9393
- Macro avg: precision 0.9047, recall 0.9202, F1 0.9119 (support: 27909)
- Weighted avg: precision 0.9395, recall 0.9393, F1 0.9392 (support: 27909)
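The B, I, and O labels suggest BIO-style span tagging over essay tokens (beginning of a span, inside a span, outside any span). A minimal inference sketch, assuming the checkpoint is published on the Hub as `Theoreticallyhugo/longformer-spans` and that the model config's `id2label` carries these tags:

```python
# Minimal inference sketch. Assumptions: the repo id below matches the
# published checkpoint, and id2label maps class ids to the B/I/O tags above.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Theoreticallyhugo/longformer-spans",  # assumed repo id
)

text = "School uniforms should be mandatory because they reduce distractions."
for token in tagger(text):
    print(token["word"], token["entity"], round(token["score"], 3))
```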

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
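These hyperparameters correspond one-to-one to fields of `transformers.TrainingArguments`. A hedged sketch of an equivalent configuration follows; the `output_dir` name is illustrative, and the Adam settings shown are the Trainer defaults, which match the values listed above. This is not the original training script.

```python
# Sketch of an equivalent Trainer configuration (assumption: mirrors the
# auto-generated hyperparameters above; dataset loading/tokenization omitted).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="longformer-spans",   # illustrative name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas/epsilon are the Trainer defaults, matching the card.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```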

### Training results

| Training Loss | Epoch | Step | Validation Loss | B (P / R / F1) | I (P / R / F1) | O (P / R / F1) | Accuracy | Macro avg (P / R / F1) | Weighted avg (P / R / F1) |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:--------------:|:--------------:|:--------:|:----------------------:|:-------------------------:|
| No log | 1.0 | 41  | 0.2773 | 0.7657 / 0.6110 / 0.6796 | 0.9200 / 0.9437 / 0.9317 | 0.8860 / 0.8633 / 0.8745 | 0.9043 | 0.8572 / 0.8060 / 0.8286 | 0.9029 / 0.9043 / 0.9031 |
| No log | 2.0 | 82  | 0.1955 | 0.7943 / 0.8716 / 0.8312 | 0.9363 / 0.9656 / 0.9507 | 0.9372 / 0.8712 / 0.9030 | 0.9307 | 0.8893 / 0.9028 / 0.8950 | 0.9312 / 0.9307 / 0.9303 |
| No log | 3.0 | 123 | 0.1872 | 0.7751 / 0.9245 / 0.8432 | 0.9386 / 0.9695 / 0.9538 | 0.9483 / 0.8684 / 0.9066 | 0.9342 | 0.8874 / 0.9208 / 0.9012 | 0.9356 / 0.9342 / 0.9339 |
| No log | 4.0 | 164 | 0.1684 | 0.8173 / 0.9084 / 0.8605 | 0.9427 / 0.9696 / 0.9560 | 0.9441 / 0.8810 / 0.9114 | 0.9378 | 0.9014 / 0.9197 / 0.9093 | 0.9384 / 0.9378 / 0.9375 |
| No log | 5.0 | 205 | 0.1675 | 0.8322 / 0.8990 / 0.8643 | 0.9500 / 0.9636 / 0.9567 | 0.9319 / 0.8980 / 0.9146 | 0.9393 | 0.9047 / 0.9202 / 0.9119 | 0.9395 / 0.9393 / 0.9392 |

Per-class cells give precision / recall / F1, rounded to four decimals; supports are B = 1059, I = 17575, O = 9275 (total 27909) on every evaluation.
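The per-class dictionaries in this card have the shape of scikit-learn's `classification_report(..., output_dict=True)` output. A small, self-contained sketch of producing such a report from flattened gold and predicted token tags; the toy arrays are illustrative, not from this dataset, and the actual evaluation code for this card is not shown:

```python
# Illustrative only: shows how per-class precision/recall/F1/support dicts
# of this shape are produced from flattened token-level tags.
from sklearn.metrics import classification_report

y_true = ["B", "I", "I", "O", "O", "B", "I", "O"]  # gold token tags (toy data)
y_pred = ["B", "I", "O", "O", "O", "B", "I", "I"]  # predicted token tags

report = classification_report(y_true, y_pred, output_dict=True)
print(report["B"])  # {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}
```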

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2