---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-simple
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: simple
      split: train[0%:20%]
      args: simple
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8390263857639599
---

# longformer-simple

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset. It achieves the following results on the evaluation set:

- Loss: 0.7323
- Accuracy: 0.8390

| Label        | Precision | Recall | F1     | Support |
|:-------------|----------:|-------:|-------:|--------:|
| Claim        | 0.5949    | 0.5882 | 0.5916 | 4262    |
| Majorclaim   | 0.8049    | 0.7182 | 0.7591 | 2165    |
| O            | 0.9060    | 0.8857 | 0.8957 | 9868    |
| Premise      | 0.8722    | 0.9057 | 0.8886 | 13039   |
| Macro avg    | 0.7945    | 0.7745 | 0.7838 | 29334   |
| Weighted avg | 0.8383    | 0.8390 | 0.8383 | 29334   |

Scores are rounded to four decimal places.
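The macro and weighted averages above follow directly from the per-class scores. A quick sketch recomputing them in plain Python from the exact values this card reports:

```python
# Recompute the reported aggregates from the per-class evaluation scores.
scores = {
    "Claim":      {"recall": 0.5882214922571563, "f1": 0.5915526191599811, "support": 4262.0},
    "Majorclaim": {"recall": 0.7182448036951501, "f1": 0.7590920185501586, "support": 2165.0},
    "O":          {"recall": 0.8856911228212404, "f1": 0.8957212400717396, "support": 9868.0},
    "Premise":    {"recall": 0.9057443055449037, "f1": 0.8886380737396539, "support": 13039.0},
}

total = sum(s["support"] for s in scores.values())  # 29334 evaluated tokens

# Macro average: unweighted mean over the four classes.
macro_f1 = sum(s["f1"] for s in scores.values()) / len(scores)

# Weighted average: mean weighted by each class's support.
weighted_f1 = sum(s["f1"] * s["support"] for s in scores.values()) / total

# Support-weighted recall equals token-level accuracy.
weighted_recall = sum(s["recall"] * s["support"] for s in scores.values()) / total

print(round(macro_f1, 4), round(weighted_f1, 4), round(weighted_recall, 4))
```

Note that the weighted recall reproduces the reported accuracy (0.8390): weighting per-class recall by support is the same as counting correct tokens over all tokens.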

## Model description

More information needed

## Intended uses & limitations

More information needed
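The card's metrics show four token-level labels (O, Claim, Majorclaim, Premise). A minimal sketch of how per-token predictions from such a model could be grouped into argument spans; the helper name and the sample sentence are illustrative, not part of this repository:

```python
def group_spans(tokens, labels):
    """Merge runs of consecutive tokens that share a non-"O" label into
    (label, text) spans; an "O" token ends the current span."""
    spans = []
    prev = "O"
    for token, label in zip(tokens, labels):
        if label == "O":
            prev = "O"
            continue
        if label == prev:
            # Extend the span opened by the previous token.
            spans[-1] = (label, spans[-1][1] + " " + token)
        else:
            spans.append((label, token))
        prev = label
    return spans

tokens = ["Cars", "pollute", ",", "so", "we", "should", "ban", "them"]
labels = ["Premise", "Premise", "O", "O", "Claim", "Claim", "Claim", "Claim"]
print(group_spans(tokens, labels))
# → [('Premise', 'Cars pollute'), ('Claim', 'we should ban them')]
```

This simple merge suffices because the label set has no B-/I- prefixes, so adjacent spans of the same type cannot be distinguished; a BIO-tagged config would need boundary-aware grouping.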

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
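A hedged sketch of how this hyperparameter list maps onto `transformers.TrainingArguments`; the `output_dir` value is illustrative, and the actual training script is not part of this card:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the arguments behind the list above;
# output_dir is a placeholder, not taken from the repository.
args = TrainingArguments(
    output_dir="longformer-simple",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,            # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=16,
)
```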

### Training results

Per-class and averaged columns list precision / recall / F1, rounded to four decimals. Class supports are constant across epochs (Claim 4262, Majorclaim 2165, O 9868, Premise 13039; 29334 tokens in total).

| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
|:--------------|------:|-----:|----------------:|:------|:-----------|:--|:--------|---------:|:----------|:-------------|
| No log | 1.0 | 41 | 0.6210 | 0.3979 / 0.3081 / 0.3473 | 0.5236 / 0.5436 / 0.5334 | 0.9157 / 0.7742 / 0.8391 | 0.7896 / 0.9352 / 0.8563 | 0.7610 | 0.6567 / 0.6403 / 0.6440 | 0.7555 / 0.7610 / 0.7527 |
| No log | 2.0 | 82 | 0.5057 | 0.5329 / 0.3461 / 0.4196 | 0.6263 / 0.6758 / 0.6501 | 0.9067 / 0.8554 / 0.8803 | 0.8213 / 0.9398 / 0.8766 | 0.8057 | 0.7218 / 0.7043 / 0.7066 | 0.7937 / 0.8057 / 0.7947 |
| No log | 3.0 | 123 | 0.4707 | 0.5422 / 0.5784 / 0.5597 | 0.6694 / 0.7612 / 0.7123 | 0.9139 / 0.8799 / 0.8966 | 0.8873 / 0.8727 / 0.8799 | 0.8241 | 0.7532 / 0.7730 / 0.7621 | 0.8300 / 0.8241 / 0.8266 |
| No log | 4.0 | 164 | 0.4995 | 0.5607 / 0.5551 / 0.5579 | 0.7490 / 0.6891 / 0.7178 | 0.9083 / 0.8660 / 0.8867 | 0.8606 / 0.9051 / 0.8823 | 0.8252 | 0.7696 / 0.7538 / 0.7612 | 0.8248 / 0.8252 / 0.8245 |
| No log | 5.0 | 205 | 0.5356 | 0.5563 / 0.5683 / 0.5622 | 0.7994 / 0.6351 / 0.7079 | 0.9168 / 0.8586 / 0.8868 | 0.8533 / 0.9174 / 0.8842 | 0.8261 | 0.7815 / 0.7449 / 0.7603 | 0.8275 / 0.8261 / 0.8253 |
| No log | 6.0 | 246 | 0.5402 | 0.5901 / 0.5662 / 0.5779 | 0.7594 / 0.7667 / 0.7630 | 0.9100 / 0.8771 / 0.8932 | 0.8705 / 0.9044 / 0.8871 | 0.8359 | 0.7825 / 0.7786 / 0.7803 | 0.8348 / 0.8359 / 0.8351 |
| No log | 7.0 | 287 | 0.5522 | 0.5645 / 0.6241 / 0.5928 | 0.7763 / 0.7339 / 0.7545 | 0.9063 / 0.8904 / 0.8983 | 0.8898 / 0.8790 / 0.8843 | 0.8351 | 0.7842 / 0.7819 / 0.7825 | 0.8397 / 0.8351 / 0.8371 |
| No log | 8.0 | 328 | 0.5864 | 0.5922 / 0.5509 / 0.5708 | 0.7888 / 0.7349 / 0.7609 | 0.9092 / 0.8944 / 0.9018 | 0.8689 / 0.9093 / 0.8886 | 0.8393 | 0.7898 / 0.7724 / 0.7805 | 0.8363 / 0.8393 / 0.8374 |
| No log | 9.0 | 369 | 0.6258 | 0.5400 / 0.6344 / 0.5835 | 0.7136 / 0.7677 / 0.7397 | 0.9209 / 0.8656 / 0.8924 | 0.8900 / 0.8684 / 0.8791 | 0.8260 | 0.7661 / 0.7840 / 0.7736 | 0.8365 / 0.8260 / 0.8303 |
| No log | 10.0 | 410 | 0.6433 | 0.5888 / 0.5946 / 0.5916 | 0.7651 / 0.7312 / 0.7478 | 0.9102 / 0.8838 / 0.8968 | 0.8781 / 0.9011 / 0.8894 | 0.8382 | 0.7855 / 0.7776 / 0.7814 | 0.8385 / 0.8382 / 0.8382 |
| No log | 11.0 | 451 | 0.6916 | 0.5963 / 0.5629 / 0.5791 | 0.7906 / 0.7201 / 0.7537 | 0.9028 / 0.8904 / 0.8965 | 0.8700 / 0.9079 / 0.8885 | 0.8380 | 0.7899 / 0.7703 / 0.7795 | 0.8354 / 0.8380 / 0.8363 |
| No log | 12.0 | 492 | 0.6997 | 0.5914 / 0.5706 / 0.5808 | 0.7971 / 0.7206 / 0.7569 | 0.9003 / 0.8943 / 0.8973 | 0.8702 / 0.8985 / 0.8842 | 0.8363 | 0.7898 / 0.7710 / 0.7798 | 0.8345 / 0.8363 / 0.8351 |
| 0.2673 | 13.0 | 533 | 0.7149 | 0.5794 / 0.5863 / 0.5829 | 0.7929 / 0.7196 / 0.7545 | 0.9064 / 0.8812 / 0.8936 | 0.8718 / 0.9001 / 0.8857 | 0.8348 | 0.7876 / 0.7718 / 0.7792 | 0.8351 / 0.8348 / 0.8347 |
| 0.2673 | 14.0 | 574 | 0.7156 | 0.5767 / 0.6112 / 0.5935 | 0.7827 / 0.7252 / 0.7528 | 0.9055 / 0.8802 / 0.8927 | 0.8822 / 0.8944 / 0.8883 | 0.8360 | 0.7868 / 0.7778 / 0.7818 | 0.8383 / 0.8360 / 0.8369 |
| 0.2673 | 15.0 | 615 | 0.7311 | 0.5838 / 0.6016 / 0.5926 | 0.7696 / 0.7404 / 0.7547 | 0.9122 / 0.8716 / 0.8914 | 0.8736 / 0.8998 / 0.8865 | 0.8352 | 0.7848 / 0.7784 / 0.7813 | 0.8368 / 0.8352 / 0.8357 |
| 0.2673 | 16.0 | 656 | 0.7323 | 0.5949 / 0.5882 / 0.5916 | 0.8049 / 0.7182 / 0.7591 | 0.9060 / 0.8857 / 0.8957 | 0.8722 / 0.9057 / 0.8886 | 0.8390 | 0.7945 / 0.7745 / 0.7838 | 0.8383 / 0.8390 / 0.8383 |

## Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
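When reproducing the environment, the versions above can be pinned; a hedged shell sketch (the CUDA 12.1 wheel index is the standard PyTorch one and is an assumption about how the environment was installed):

```shell
# Pin the library versions listed in this card.
pip install "transformers==4.37.2" "datasets==2.17.0" "tokenizers==0.15.2"
# PyTorch 2.2.0 with CUDA 12.1 wheels, matching "2.2.0+cu121".
pip install "torch==2.2.0" --index-url https://download.pytorch.org/whl/cu121
```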