---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
  - generated_from_trainer
datasets:
  - essays_su_g
metrics:
  - accuracy
model-index:
  - name: longformer-simple
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: essays_su_g
          type: essays_su_g
          config: simple
          split: train[80%:100%]
          args: simple
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8449618016582787
---

# longformer-simple

This model is a fine-tuned version of allenai/longformer-base-4096 on the essays_su_g dataset. It achieves the following results on the evaluation set:

- Loss: 0.6109
- Claim: {'precision': 0.6048215551878988, 'recall': 0.6139635316698656, 'f1-score': 0.6093582569353494, 'support': 4168.0}
- Majorclaim: {'precision': 0.7849462365591398, 'recall': 0.8141263940520446, 'f1-score': 0.7992700729927007, 'support': 2152.0}
- O: {'precision': 0.931049822064057, 'recall': 0.9074355083459787, 'f1-score': 0.9190910088923043, 'support': 9226.0}
- Premise: {'precision': 0.8758632028937849, 'recall': 0.8824650045556199, 'f1-score': 0.8791517101951561, 'support': 12073.0}
- Accuracy: 0.8450
- Macro avg: {'precision': 0.7991702041762201, 'recall': 0.8044976096558771, 'f1-score': 0.8017177622538776, 'support': 27619.0}
- Weighted avg: {'precision': 0.8463109688981529, 'recall': 0.8449618016582787, 'f1-score': 0.8455543885446015, 'support': 27619.0}
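
A minimal usage sketch with the Transformers pipeline API is shown below. The repository id `Theoreticallyhugo/longformer-simple` is inferred from this card's title and author and may differ from where the checkpoint is actually hosted.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Assumed repository id; replace it with the actual checkpoint location if it differs.
model_id = "Theoreticallyhugo/longformer-simple"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Token-classification pipeline that merges sub-word predictions into word-level spans.
classifier = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

essay = "School uniforms should be mandatory because they reduce peer pressure."
for span in classifier(essay):
    print(span["entity_group"], round(span["score"], 3), span["word"])
```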

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
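
These values map onto a `TrainingArguments` configuration roughly as sketched below; the output directory and any arguments not listed on this card are assumptions and keep the `Trainer` defaults.

```python
from transformers import TrainingArguments

# Sketch of the training configuration implied by the hyperparameters above.
# output_dir is an assumption; anything not on this card is left at its default.
training_args = TrainingArguments(
    output_dir="longformer-simple",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=14,
)
```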

### Training results

| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| No log | 1.0 | 41 | 0.5692 | {'precision': 0.4959266802443992, 'recall': 0.2336852207293666, 'f1-score': 0.3176777560339204, 'support': 4168.0} | {'precision': 0.5318267419962335, 'recall': 0.6561338289962825, 'f1-score': 0.5874765966299147, 'support': 2152.0} | {'precision': 0.9195207551736657, 'recall': 0.8235421634511164, 'f1-score': 0.8688890159528846, 'support': 9226.0} | {'precision': 0.7774309560968989, 'recall': 0.9489770562411993, 'f1-score': 0.8546810891458411, 'support': 12073.0} | 0.7763 | {'precision': 0.6811762833777993, 'recall': 0.6655845673544912, 'f1-score': 0.6571811144406402, 'support': 27619.0} | {'precision': 0.7632765839539684, 'recall': 0.7763134074369094, 'f1-score': 0.7575678110552883, 'support': 27619.0} |
| No log | 2.0 | 82 | 0.4421 | {'precision': 0.6073619631901841, 'recall': 0.45129558541266795, 'f1-score': 0.5178251892635926, 'support': 4168.0} | {'precision': 0.7044500419815282, 'recall': 0.7797397769516728, 'f1-score': 0.7401852668725187, 'support': 2152.0} | {'precision': 0.9249115599680475, 'recall': 0.8784955560372859, 'f1-score': 0.9011062315859693, 'support': 9226.0} | {'precision': 0.8318756073858115, 'recall': 0.9217261658245672, 'f1-score': 0.874499017681729, 'support': 12073.0} | 0.8252 | {'precision': 0.7671497931313928, 'recall': 0.7578142710565485, 'f1-score': 0.7584039263509523, 'support': 27619.0} | {'precision': 0.8191436841723105, 'recall': 0.8252290090155328, 'f1-score': 0.8190957969602078, 'support': 27619.0} |
| No log | 3.0 | 123 | 0.4286 | {'precision': 0.5590650663297536, 'recall': 0.636996161228407, 'f1-score': 0.5954917573174835, 'support': 4168.0} | {'precision': 0.7408256880733946, 'recall': 0.7504646840148699, 'f1-score': 0.7456140350877194, 'support': 2152.0} | {'precision': 0.949893137022085, 'recall': 0.8671146759158899, 'f1-score': 0.9066183136899364, 'support': 9226.0} | {'precision': 0.8706390609716336, 'recall': 0.8847013998177752, 'f1-score': 0.877613902469085, 'support': 12073.0} | 0.8310 | {'precision': 0.7801057380992167, 'recall': 0.7848192302442354, 'f1-score': 0.7813345021410562, 'support': 27619.0} | {'precision': 0.839978983398119, 'recall': 0.8309859154929577, 'f1-score': 0.834442385843827, 'support': 27619.0} |
| No log | 4.0 | 164 | 0.4205 | {'precision': 0.6486063263388663, 'recall': 0.4968809980806142, 'f1-score': 0.5626952859665806, 'support': 4168.0} | {'precision': 0.7857142857142857, 'recall': 0.7973977695167286, 'f1-score': 0.7915129151291513, 'support': 2152.0} | {'precision': 0.9162780609478365, 'recall': 0.9157814871016692, 'f1-score': 0.9160297067273812, 'support': 9226.0} | {'precision': 0.8506259119883266, 'recall': 0.9174190342085645, 'f1-score': 0.8827608193193592, 'support': 12073.0} | 0.8441 | {'precision': 0.8003061462473288, 'recall': 0.7818698222268942, 'f1-score': 0.788249681785618, 'support': 27619.0} | {'precision': 0.8370120691110231, 'recall': 0.8440566276838408, 'f1-score': 0.8384630577202681, 'support': 27619.0} |
| No log | 5.0 | 205 | 0.4555 | {'precision': 0.5530616263043906, 'recall': 0.6739443378119002, 'f1-score': 0.6075483940737536, 'support': 4168.0} | {'precision': 0.7023945267958951, 'recall': 0.8587360594795539, 'f1-score': 0.7727367760819569, 'support': 2152.0} | {'precision': 0.931986531986532, 'recall': 0.9000650336006937, 'f1-score': 0.9157476841640935, 'support': 9226.0} | {'precision': 0.9083553050277298, 'recall': 0.8275490764515862, 'f1-score': 0.8660714285714286, 'support': 12073.0} | 0.8310 | {'precision': 0.773949497528637, 'recall': 0.8150736268359334, 'f1-score': 0.7905260707228081, 'support': 27619.0} | {'precision': 0.8465837004167056, 'recall': 0.8310221224519353, 'f1-score': 0.83637929468368, 'support': 27619.0} |
| No log | 6.0 | 246 | 0.4529 | {'precision': 0.5858436907520539, 'recall': 0.6672264875239923, 'f1-score': 0.6238923163208077, 'support': 4168.0} | {'precision': 0.779319606087735, 'recall': 0.8090148698884758, 'f1-score': 0.7938896488828089, 'support': 2152.0} | {'precision': 0.9153763440860215, 'recall': 0.9227184045089963, 'f1-score': 0.9190327107848429, 'support': 9226.0} | {'precision': 0.9016581407655672, 'recall': 0.8467655098152903, 'f1-score': 0.8733501345521336, 'support': 12073.0} | 0.8421 | {'precision': 0.7955494454228443, 'recall': 0.8114313179341887, 'f1-score': 0.8025412026351483, 'support': 27619.0} | {'precision': 0.8490485962328721, 'recall': 0.842101451899055, 'f1-score': 0.8447730063713313, 'support': 27619.0} |
| No log | 7.0 | 287 | 0.5058 | {'precision': 0.6189119908857875, 'recall': 0.5213531669865643, 'f1-score': 0.565959109259018, 'support': 4168.0} | {'precision': 0.8097773475314618, 'recall': 0.7774163568773235, 'f1-score': 0.7932669511616881, 'support': 2152.0} | {'precision': 0.9237928391547137, 'recall': 0.9144808150877953, 'f1-score': 0.9191132414619532, 'support': 9226.0} | {'precision': 0.8518862808893021, 'recall': 0.9108755073304067, 'f1-score': 0.8803938835961892, 'support': 12073.0} | 0.8429 | {'precision': 0.8010921146153162, 'recall': 0.7810314615705225, 'f1-score': 0.7896832963697121, 'support': 27619.0} | {'precision': 0.8374670275215469, 'recall': 0.8428980049965603, 'f1-score': 0.8390876631549409, 'support': 27619.0} |
| No log | 8.0 | 328 | 0.5503 | {'precision': 0.579874797124971, 'recall': 0.6000479846449136, 'f1-score': 0.5897889399834925, 'support': 4168.0} | {'precision': 0.7640207075064711, 'recall': 0.8229553903345725, 'f1-score': 0.7923937360178971, 'support': 2152.0} | {'precision': 0.9484103877955048, 'recall': 0.882722740082376, 'f1-score': 0.9143883680458093, 'support': 9226.0} | {'precision': 0.8650108862188534, 'recall': 0.8885115547088545, 'f1-score': 0.8766037427474054, 'support': 12073.0} | 0.8379 | {'precision': 0.7893291946614501, 'recall': 0.7985594174426791, 'f1-score': 0.793293696698651, 'support': 27619.0} | {'precision': 0.8419711569605107, 'recall': 0.8379376516166407, 'f1-score': 0.8393807050053141, 'support': 27619.0} |
| No log | 9.0 | 369 | 0.5586 | {'precision': 0.5553219950315307, 'recall': 0.6972168905950096, 'f1-score': 0.6182321029677693, 'support': 4168.0} | {'precision': 0.762192490289167, 'recall': 0.820631970260223, 'f1-score': 0.7903334079212352, 'support': 2152.0} | {'precision': 0.9405251141552512, 'recall': 0.8930197268588771, 'f1-score': 0.9161570110085622, 'support': 9226.0} | {'precision': 0.8953046246352463, 'recall': 0.8386482233082084, 'f1-score': 0.8660508083140878, 'support': 12073.0} | 0.8341 | {'precision': 0.7883360560277988, 'recall': 0.8123792027555796, 'f1-score': 0.7976933325529135, 'support': 27619.0} | {'precision': 0.8487315887907376, 'recall': 0.8340635070060466, 'f1-score': 0.8394903831187637, 'support': 27619.0} |
| No log | 10.0 | 410 | 0.5841 | {'precision': 0.5999538319482918, 'recall': 0.6235604606525912, 'f1-score': 0.611529411764706, 'support': 4168.0} | {'precision': 0.7489643744821872, 'recall': 0.8401486988847584, 'f1-score': 0.791940429259746, 'support': 2152.0} | {'precision': 0.9401190748797802, 'recall': 0.8899848254931715, 'f1-score': 0.9143652561247216, 'support': 9226.0} | {'precision': 0.8760194414696433, 'recall': 0.8808084154725421, 'f1-score': 0.8784074012886173, 'support': 12073.0} | 0.8419 | {'precision': 0.7912641806949756, 'recall': 0.8086256001257658, 'f1-score': 0.7990606246094477, 'support': 27619.0} | {'precision': 0.8458706038288859, 'recall': 0.8418842101451899, 'f1-score': 0.8434069590052654, 'support': 27619.0} |
| No log | 11.0 | 451 | 0.5806 | {'precision': 0.5889105479748754, 'recall': 0.6523512476007678, 'f1-score': 0.6190096755833807, 'support': 4168.0} | {'precision': 0.7572354211663067, 'recall': 0.8145910780669146, 'f1-score': 0.7848668009850012, 'support': 2152.0} | {'precision': 0.9387871202639663, 'recall': 0.8943203988727509, 'f1-score': 0.91601443241743, 'support': 9226.0} | {'precision': 0.8838460245419398, 'recall': 0.8710345398823822, 'f1-score': 0.8773935171665763, 'support': 12073.0} | 0.8414 | {'precision': 0.792194778486772, 'recall': 0.8080743161057039, 'f1-score': 0.7993211065380971, 'support': 27619.0} | {'precision': 0.8478247878691975, 'recall': 0.8414135196784822, 'f1-score': 0.8440923556170222, 'support': 27619.0} |
| No log | 12.0 | 492 | 0.6116 | {'precision': 0.600094540297802, 'recall': 0.6091650671785028, 'f1-score': 0.6045957852125253, 'support': 4168.0} | {'precision': 0.7798594847775175, 'recall': 0.7736988847583643, 'f1-score': 0.7767669699090273, 'support': 2152.0} | {'precision': 0.9319225170753554, 'recall': 0.9021244309559939, 'f1-score': 0.9167814066200364, 'support': 9226.0} | {'precision': 0.8723421522480117, 'recall': 0.8903338027002402, 'f1-score': 0.8812461569993852, 'support': 12073.0} | 0.8428 | {'precision': 0.7960546735996716, 'recall': 0.7938305463982752, 'f1-score': 0.7948475796852434, 'support': 27619.0} | {'precision': 0.8439536406759814, 'recall': 0.8427531771606502, 'f1-score': 0.843226324738045, 'support': 27619.0} |
| 0.2736 | 13.0 | 533 | 0.6151 | {'precision': 0.5982355746304244, 'recall': 0.6019673704414588, 'f1-score': 0.6000956708921311, 'support': 4168.0} | {'precision': 0.7789473684210526, 'recall': 0.7908921933085502, 'f1-score': 0.7848743370993774, 'support': 2152.0} | {'precision': 0.9333632488220777, 'recall': 0.9017992629525254, 'f1-score': 0.9173098125689085, 'support': 9226.0} | {'precision': 0.8696251825409703, 'recall': 0.8878489190756232, 'f1-score': 0.8786425673183327, 'support': 12073.0} | 0.8418 | {'precision': 0.7950428436036313, 'recall': 0.7956269364445394, 'f1-score': 0.7952305969696873, 'support': 27619.0} | {'precision': 0.8428956433741749, 'recall': 0.8418117962272349, 'f1-score': 0.8422173277711444, 'support': 27619.0} |
| 0.2736 | 14.0 | 574 | 0.6109 | {'precision': 0.6048215551878988, 'recall': 0.6139635316698656, 'f1-score': 0.6093582569353494, 'support': 4168.0} | {'precision': 0.7849462365591398, 'recall': 0.8141263940520446, 'f1-score': 0.7992700729927007, 'support': 2152.0} | {'precision': 0.931049822064057, 'recall': 0.9074355083459787, 'f1-score': 0.9190910088923043, 'support': 9226.0} | {'precision': 0.8758632028937849, 'recall': 0.8824650045556199, 'f1-score': 0.8791517101951561, 'support': 12073.0} | 0.8450 | {'precision': 0.7991702041762201, 'recall': 0.8044976096558771, 'f1-score': 0.8017177622538776, 'support': 27619.0} | {'precision': 0.8463109688981529, 'recall': 0.8449618016582787, 'f1-score': 0.8455543885446015, 'support': 27619.0} |
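
The per-label dictionaries above follow the same precision / recall / f1-score / support layout that, for example, scikit-learn's `classification_report` produces over flattened token-level labels. A minimal sketch of that computation is shown below; the example labels are purely illustrative and not taken from the dataset.

```python
from sklearn.metrics import classification_report

# Illustrative token-level labels and predictions; in practice these would come
# from the evaluation loop after dropping special tokens and padded positions.
y_true = ["O", "Claim", "Premise", "Premise", "Majorclaim", "O"]
y_pred = ["O", "Claim", "Premise", "Claim", "Majorclaim", "O"]

# output_dict=True yields per-label precision/recall/f1-score/support plus
# accuracy, macro avg, and weighted avg, matching the structure reported above.
report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
print(report["Premise"], report["accuracy"])
```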

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2