Theoreticallyhugo committed
Commit 55a8b23
1 Parent(s): 141f598

Training in progress, epoch 1

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. README.md +28 -20
  2. meta_data/README_s42_e10.md +90 -0
  3. meta_data/README_s42_e11.md +91 -0
  4. meta_data/README_s42_e12.md +92 -0
  5. meta_data/README_s42_e13.md +93 -0
  6. meta_data/README_s42_e14.md +94 -0
  7. meta_data/README_s42_e15.md +95 -0
  8. meta_data/README_s42_e4.md +16 -17
  9. meta_data/README_s42_e5.md +17 -18
  10. meta_data/README_s42_e6.md +18 -19
  11. meta_data/README_s42_e7.md +19 -19
  12. meta_data/README_s42_e8.md +88 -0
  13. meta_data/README_s42_e9.md +89 -0
  14. meta_data/meta_s42_e10_cvi0.json +1 -0
  15. meta_data/meta_s42_e10_cvi1.json +1 -0
  16. meta_data/meta_s42_e10_cvi2.json +1 -0
  17. meta_data/meta_s42_e10_cvi3.json +1 -0
  18. meta_data/meta_s42_e10_cvi4.json +1 -0
  19. meta_data/meta_s42_e11_cvi0.json +1 -0
  20. meta_data/meta_s42_e11_cvi1.json +1 -0
  21. meta_data/meta_s42_e11_cvi2.json +1 -0
  22. meta_data/meta_s42_e11_cvi3.json +1 -0
  23. meta_data/meta_s42_e11_cvi4.json +1 -0
  24. meta_data/meta_s42_e12_cvi0.json +1 -0
  25. meta_data/meta_s42_e12_cvi1.json +1 -0
  26. meta_data/meta_s42_e12_cvi2.json +1 -0
  27. meta_data/meta_s42_e12_cvi3.json +1 -0
  28. meta_data/meta_s42_e12_cvi4.json +1 -0
  29. meta_data/meta_s42_e13_cvi0.json +1 -0
  30. meta_data/meta_s42_e13_cvi1.json +1 -0
  31. meta_data/meta_s42_e13_cvi2.json +1 -0
  32. meta_data/meta_s42_e13_cvi3.json +1 -0
  33. meta_data/meta_s42_e13_cvi4.json +1 -0
  34. meta_data/meta_s42_e14_cvi0.json +1 -0
  35. meta_data/meta_s42_e14_cvi1.json +1 -0
  36. meta_data/meta_s42_e14_cvi2.json +1 -0
  37. meta_data/meta_s42_e14_cvi3.json +1 -0
  38. meta_data/meta_s42_e14_cvi4.json +1 -0
  39. meta_data/meta_s42_e15_cvi0.json +1 -0
  40. meta_data/meta_s42_e15_cvi1.json +1 -0
  41. meta_data/meta_s42_e15_cvi2.json +1 -0
  42. meta_data/meta_s42_e15_cvi3.json +1 -0
  43. meta_data/meta_s42_e15_cvi4.json +1 -0
  44. meta_data/meta_s42_e16_cvi0.json +1 -0
  45. meta_data/meta_s42_e4_cvi0.json +1 -0
  46. meta_data/meta_s42_e4_cvi1.json +1 -0
  47. meta_data/meta_s42_e4_cvi2.json +1 -0
  48. meta_data/meta_s42_e4_cvi3.json +1 -0
  49. meta_data/meta_s42_e4_cvi4.json +1 -0
  50. meta_data/meta_s42_e5_cvi0.json +1 -0
README.md CHANGED
@@ -17,12 +17,12 @@ model-index:
  name: essays_su_g
  type: essays_su_g
  config: simple
- split: test
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.8374001218245011
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,14 +32,14 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4624
- - Claim: {'precision': 0.5906025179856115, 'recall': 0.617826904985889, 'f1-score': 0.6039080459770114, 'support': 4252.0}
- - Majorclaim: {'precision': 0.7631810193321616, 'recall': 0.7960586617781852, 'f1-score': 0.7792732166890982, 'support': 2182.0}
- - O: {'precision': 0.9296403841858387, 'recall': 0.897466307277628, 'f1-score': 0.913270064183444, 'support': 9275.0}
- - Premise: {'precision': 0.8734363502575423, 'recall': 0.875655737704918, 'f1-score': 0.8745446359133887, 'support': 12200.0}
- - Accuracy: 0.8374
- - Macro avg: {'precision': 0.7892150679402886, 'recall': 0.7967519029366551, 'f1-score': 0.7927489906907357, 'support': 27909.0}
- - Weighted avg: {'precision': 0.8404042039171331, 'recall': 0.8374001218245011, 'f1-score': 0.8387335832080923, 'support': 27909.0}
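The "Macro avg" and "Weighted avg" rows follow directly from the per-class scores above: macro is the unweighted mean over the four classes, weighted is the mean weighted by each class's token support. A minimal sketch in plain Python, using the epoch-7 F1 values from this card:

```python
# Per-class f1 and support, copied from the epoch-7 evaluation above.
f1 = {"Claim": 0.6039080459770114, "Majorclaim": 0.7792732166890982,
      "O": 0.913270064183444, "Premise": 0.8745446359133887}
support = {"Claim": 4252.0, "Majorclaim": 2182.0, "O": 9275.0, "Premise": 12200.0}

# Macro average: unweighted mean over classes.
macro_f1 = sum(f1.values()) / len(f1)

# Weighted average: mean weighted by each class's support (token count).
total = sum(support.values())
weighted_f1 = sum(f1[c] * support[c] for c in f1) / total

print(round(macro_f1, 4), round(weighted_f1, 4))  # 0.7927 0.8387
```

Both values reproduce the card's reported averages (0.7927489906907357 and 0.8387335832080923) to floating-point precision.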

  ## Model description

@@ -64,19 +64,27 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 7
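With `lr_scheduler_type: linear` and no warmup, the learning rate decays evenly to zero over all optimizer steps; the Step column shows 41 steps per epoch, so the 7-epoch run has 287 steps. A minimal sketch of that schedule, assuming the base learning rate of 2e-05 listed in this commit's meta_data cards (not the transformers implementation itself):

```python
# Sketch of a linear learning-rate decay schedule, assuming zero warmup
# steps. base_lr and total_steps are taken from this training run's card.
def linear_lr(step: int, base_lr: float = 2e-05, total_steps: int = 287) -> float:
    """Learning rate after `step` optimizer steps: linear decay to zero."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0))    # 2e-05 at the start of training
print(linear_lr(287))  # 0.0 after the final step (end of epoch 7)
```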

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
- |:-------------:|:-----:|:----:|:---------------:|:-----:|:----------:|:---:|:-------:|:--------:|:---------:|:------------:|
- | No log | 1.0 | 41 | 0.5869 | {'precision': 0.50130958617077, 'recall': 0.2250705550329257, 'f1-score': 0.310663853270573, 'support': 4252.0} | {'precision': 0.6411985018726591, 'recall': 0.3923006416131989, 'f1-score': 0.48677850440716514, 'support': 2182.0} | {'precision': 0.8172767203513909, 'recall': 0.9027493261455526, 'f1-score': 0.8578893442622951, 'support': 9275.0} | {'precision': 0.7879334257975035, 'recall': 0.931311475409836, 'f1-score': 0.8536438767843726, 'support': 12200.0} | 0.7721 | {'precision': 0.6869295585480809, 'recall': 0.6128579995503783, 'f1-score': 0.6272438946811014, 'support': 27909.0} | {'precision': 0.7425451598936884, 'recall': 0.7720806908165825, 'f1-score': 0.7436480119504476, 'support': 27909.0} |
- | No log | 2.0 | 82 | 0.4605 | {'precision': 0.5800683670786222, 'recall': 0.5188146754468486, 'f1-score': 0.5477343265052762, 'support': 4252.0} | {'precision': 0.679080824088748, 'recall': 0.7855178735105408, 'f1-score': 0.7284317892052699, 'support': 2182.0} | {'precision': 0.9250369696280286, 'recall': 0.8767654986522911, 'f1-score': 0.900254621941769, 'support': 9275.0} | {'precision': 0.8497380970995231, 'recall': 0.8909016393442623, 'f1-score': 0.8698331399303749, 'support': 12200.0} | 0.8213 | {'precision': 0.7584810644737304, 'recall': 0.7679999217384857, 'f1-score': 0.7615634693956725, 'support': 27909.0} | {'precision': 0.8203349361458346, 'recall': 0.8212762908022502, 'f1-score': 0.8198154876923865, 'support': 27909.0} |
- | No log | 3.0 | 123 | 0.4587 | {'precision': 0.6081277213352685, 'recall': 0.39416745061147695, 'f1-score': 0.478310502283105, 'support': 4252.0} | {'precision': 0.7005473025801408, 'recall': 0.8212648945921174, 'f1-score': 0.7561181434599156, 'support': 2182.0} | {'precision': 0.9445551517993201, 'recall': 0.8687870619946092, 'f1-score': 0.905088172526115, 'support': 9275.0} | {'precision': 0.8125, 'recall': 0.9366393442622951, 'f1-score': 0.8701644837039293, 'support': 12200.0} | 0.8224 | {'precision': 0.7664325439286823, 'recall': 0.7552146878651247, 'f1-score': 0.7524203254932662, 'support': 27909.0} | {'precision': 0.8164965537384401, 'recall': 0.8224228743416102, 'f1-score': 0.8131543783763286, 'support': 27909.0} |
- | No log | 4.0 | 164 | 0.4491 | {'precision': 0.5829145728643216, 'recall': 0.6274694261523989, 'f1-score': 0.6043719560539133, 'support': 4252.0} | {'precision': 0.7112758486149044, 'recall': 0.8354720439963337, 'f1-score': 0.7683877766069548, 'support': 2182.0} | {'precision': 0.9357652656621729, 'recall': 0.8905660377358491, 'f1-score': 0.9126063418406806, 'support': 9275.0} | {'precision': 0.881426896667225, 'recall': 0.8627868852459016, 'f1-score': 0.8720072901996521, 'support': 12200.0} | 0.8340 | {'precision': 0.7778456459521561, 'recall': 0.8040735982826209, 'f1-score': 0.7893433411753001, 'support': 27909.0} | {'precision': 0.8407032729174679, 'recall': 0.8340320326776308, 'f1-score': 0.8366234708053201, 'support': 27909.0} |
- | No log | 5.0 | 205 | 0.4611 | {'precision': 0.5805860805860806, 'recall': 0.5964252116650988, 'f1-score': 0.588399071925754, 'support': 4252.0} | {'precision': 0.7489102005231038, 'recall': 0.7873510540788268, 'f1-score': 0.7676496872207329, 'support': 2182.0} | {'precision': 0.9323308270676691, 'recall': 0.8957412398921832, 'f1-score': 0.9136698559331353, 'support': 9275.0} | {'precision': 0.8673800259403373, 'recall': 0.8770491803278688, 'f1-score': 0.8721878056732963, 'support': 12200.0} | 0.8335 | {'precision': 0.7823017835292977, 'recall': 0.7891416714909945, 'f1-score': 0.7854766051882296, 'support': 27909.0} | {'precision': 0.8360091300196414, 'recall': 0.8334945716435559, 'f1-score': 0.8345646069131102, 'support': 27909.0} |
- | No log | 6.0 | 246 | 0.4642 | {'precision': 0.5962333486449242, 'recall': 0.6105362182502352, 'f1-score': 0.6033000232396003, 'support': 4252.0} | {'precision': 0.7385488447507094, 'recall': 0.8350137488542622, 'f1-score': 0.783824478382448, 'support': 2182.0} | {'precision': 0.9409678526484384, 'recall': 0.8867924528301887, 'f1-score': 0.9130772646536413, 'support': 9275.0} | {'precision': 0.8715477443913501, 'recall': 0.8820491803278688, 'f1-score': 0.8767670183729173, 'support': 12200.0} | 0.8386 | {'precision': 0.7868244476088555, 'recall': 0.8035979000656387, 'f1-score': 0.7942421961621517, 'support': 27909.0} | {'precision': 0.8422751475356695, 'recall': 0.8385825360994661, 'f1-score': 0.8399041873394746, 'support': 27909.0} |
- | No log | 7.0 | 287 | 0.4624 | {'precision': 0.5906025179856115, 'recall': 0.617826904985889, 'f1-score': 0.6039080459770114, 'support': 4252.0} | {'precision': 0.7631810193321616, 'recall': 0.7960586617781852, 'f1-score': 0.7792732166890982, 'support': 2182.0} | {'precision': 0.9296403841858387, 'recall': 0.897466307277628, 'f1-score': 0.913270064183444, 'support': 9275.0} | {'precision': 0.8734363502575423, 'recall': 0.875655737704918, 'f1-score': 0.8745446359133887, 'support': 12200.0} | 0.8374 | {'precision': 0.7892150679402886, 'recall': 0.7967519029366551, 'f1-score': 0.7927489906907357, 'support': 27909.0} | {'precision': 0.8404042039171331, 'recall': 0.8374001218245011, 'f1-score': 0.8387335832080923, 'support': 27909.0} |

  ### Framework versions
 
  name: essays_su_g
  type: essays_su_g
  config: simple
+ split: train[80%:100%]
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
+ value: 0.8436583511350881
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 

  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.6438
+ - Claim: {'precision': 0.6039084842707341, 'recall': 0.6079654510556622, 'f1-score': 0.6059301769488283, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7791218637992832, 'recall': 0.8080855018587361, 'f1-score': 0.7933394160583942, 'support': 2152.0}
+ - O: {'precision': 0.936604624929498, 'recall': 0.8999566442662043, 'f1-score': 0.9179149853518324, 'support': 9226.0}
+ - Premise: {'precision': 0.8701119584617881, 'recall': 0.8883458958005467, 'f1-score': 0.8791343907537195, 'support': 12073.0}
+ - Accuracy: 0.8437
+ - Macro avg: {'precision': 0.7974367328653259, 'recall': 0.8010883732452874, 'f1-score': 0.7990797422781937, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8450608913228282, 'recall': 0.8436583511350881, 'f1-score': 0.8441745376482147, 'support': 27619.0}

  ## Model description

 
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
+ - num_epochs: 15

  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:-----:|:----------:|:---:|:-------:|:--------:|:---------:|:------------:|
+ | No log | 1.0 | 41 | 0.5691 | {'precision': 0.4949392712550607, 'recall': 0.23464491362763915, 'f1-score': 0.318359375, 'support': 4168.0} | {'precision': 0.5329815303430079, 'recall': 0.6570631970260223, 'f1-score': 0.5885535900104059, 'support': 2152.0} | {'precision': 0.919937015503876, 'recall': 0.8232169954476479, 'f1-score': 0.86889371925409, 'support': 9226.0} | {'precision': 0.7775213791231166, 'recall': 0.9488942267870455, 'f1-score': 0.8547021300406611, 'support': 12073.0} | 0.7764 | {'precision': 0.6813447990562653, 'recall': 0.6659548332220888, 'f1-score': 0.6576272035762892, 'support': 27619.0} | {'precision': 0.7633961277048913, 'recall': 0.7763858213548644, 'f1-score': 0.7577653597350205, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4424 | {'precision': 0.6061801446416831, 'recall': 0.44241842610364684, 'f1-score': 0.511511789181692, 'support': 4168.0} | {'precision': 0.6986357999173212, 'recall': 0.7853159851301115, 'f1-score': 0.7394443229052724, 'support': 2152.0} | {'precision': 0.9271515569343904, 'recall': 0.8745935399956645, 'f1-score': 0.900105973562385, 'support': 9226.0} | {'precision': 0.8290598290598291, 'recall': 0.9239625610867225, 'f1-score': 0.8739423378251332, 'support': 12073.0} | 0.8240 | {'precision': 0.765256832638306, 'recall': 0.7565726280790362, 'f1-score': 0.7562511058686207, 'support': 27619.0} | {'precision': 0.818029713776915, 'recall': 0.8239979724102973, 'f1-score': 0.8175078343477619, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4282 | {'precision': 0.5578358208955224, 'recall': 0.6456333973128598, 'f1-score': 0.598532028469751, 'support': 4168.0} | {'precision': 0.7531854648419065, 'recall': 0.741635687732342, 'f1-score': 0.74736595645048, 'support': 2152.0} | {'precision': 0.9484389782403028, 'recall': 0.8692824626056797, 'f1-score': 0.9071372016740188, 'support': 9226.0} | {'precision': 0.872013093289689, 'recall': 0.8826306634639278, 'f1-score': 0.8772897542501955, 'support': 12073.0} | 0.8314 | {'precision': 0.7828683393168552, 'recall': 0.7847955527787023, 'f1-score': 0.7825812352111113, 'support': 27619.0} | {'precision': 0.8408713896362565, 'recall': 0.831420399000688, 'f1-score': 0.835069338449997, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4201 | {'precision': 0.6498756218905473, 'recall': 0.5014395393474088, 'f1-score': 0.566088840736728, 'support': 4168.0} | {'precision': 0.781447963800905, 'recall': 0.8025092936802974, 'f1-score': 0.7918386061439706, 'support': 2152.0} | {'precision': 0.9166757197175448, 'recall': 0.9145892044222849, 'f1-score': 0.9156312733980794, 'support': 9226.0} | {'precision': 0.8523252232830305, 'recall': 0.9169220574836412, 'f1-score': 0.8834443956745541, 'support': 12073.0} | 0.8445 | {'precision': 0.8000811321730068, 'recall': 0.7838650237334082, 'f1-score': 0.7892507789883332, 'support': 27619.0} | {'precision': 0.8377468489427367, 'recall': 0.8445273181505485, 'f1-score': 0.839166272709442, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4511 | {'precision': 0.5708701913186587, 'recall': 0.6657869481765835, 'f1-score': 0.614686011739949, 'support': 4168.0} | {'precision': 0.7117988394584139, 'recall': 0.8550185873605948, 'f1-score': 0.7768629934557737, 'support': 2152.0} | {'precision': 0.9312506998096518, 'recall': 0.9014740949490571, 'f1-score': 0.916120504488627, 'support': 9226.0} | {'precision': 0.9057107276285359, 'recall': 0.8433695021949805, 'f1-score': 0.8734291228822647, 'support': 12073.0} | 0.8369 | {'precision': 0.7799076145538151, 'recall': 0.816412283170304, 'f1-score': 0.7952746581416535, 'support': 27619.0} | {'precision': 0.8486021445756123, 'recall': 0.8368876498062928, 'f1-score': 0.8411187238429554, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4568 | {'precision': 0.5821080969144751, 'recall': 0.6744241842610365, 'f1-score': 0.6248749583194397, 'support': 4168.0} | {'precision': 0.7789237668161435, 'recall': 0.8071561338289963, 'f1-score': 0.7927886809675947, 'support': 2152.0} | {'precision': 0.9134450171821306, 'recall': 0.9219596791675699, 'f1-score': 0.9176825979070017, 'support': 9226.0} | {'precision': 0.9038051209103841, 'recall': 0.8420442309285182, 'f1-score': 0.8718322541915012, 'support': 12073.0} | 0.8407 | {'precision': 0.7945705004557834, 'recall': 0.8113960570465302, 'f1-score': 0.8017946228463844, 'support': 27619.0} | {'precision': 0.8487473640392945, 'recall': 0.8407255874579094, 'f1-score': 0.8437210080329368, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.5084 | {'precision': 0.6148536720044174, 'recall': 0.5343090211132437, 'f1-score': 0.5717586649550707, 'support': 4168.0} | {'precision': 0.8070429329474192, 'recall': 0.7774163568773235, 'f1-score': 0.7919526627218936, 'support': 2152.0} | {'precision': 0.9237677984665936, 'recall': 0.914155647084327, 'f1-score': 0.9189365874918283, 'support': 9226.0} | {'precision': 0.8554009692043145, 'recall': 0.9064855462602501, 'f1-score': 0.8802026782482808, 'support': 12073.0} | 0.8428 | {'precision': 0.8002663431556862, 'recall': 0.7830916428337862, 'f1-score': 0.7907126483542682, 'support': 27619.0} | {'precision': 0.838169524837023, 'recall': 0.8428255910786053, 'f1-score': 0.8397178803143253, 'support': 27619.0} |
+ | No log | 8.0 | 328 | 0.5501 | {'precision': 0.5789353438428148, 'recall': 0.6079654510556622, 'f1-score': 0.5930953774136923, 'support': 4168.0} | {'precision': 0.7649107531562909, 'recall': 0.8164498141263941, 'f1-score': 0.7898404135760846, 'support': 2152.0} | {'precision': 0.9487657196087564, 'recall': 0.8831562974203339, 'f1-score': 0.9147861232738297, 'support': 9226.0} | {'precision': 0.8670389253054949, 'recall': 0.8874347718048539, 'f1-score': 0.8771182971756039, 'support': 12073.0} | 0.8383 | {'precision': 0.7899126854783393, 'recall': 0.7987515836018111, 'f1-score': 0.7937100528598027, 'support': 27619.0} | {'precision': 0.8429039403400853, 'recall': 0.8382997212064158, 'f1-score': 0.8400385270357877, 'support': 27619.0} |
+ | No log | 9.0 | 369 | 0.5615 | {'precision': 0.5539168741620379, 'recall': 0.6938579654510557, 'f1-score': 0.6160400468633508, 'support': 4168.0} | {'precision': 0.7715914072775099, 'recall': 0.8178438661710037, 'f1-score': 0.7940446650124069, 'support': 2152.0} | {'precision': 0.9404937990670156, 'recall': 0.8959462388900932, 'f1-score': 0.9176797113516515, 'support': 9226.0} | {'precision': 0.8944209039548022, 'recall': 0.8392280294872857, 'f1-score': 0.865945899747874, 'support': 12073.0} | 0.8346 | {'precision': 0.7901057461153415, 'recall': 0.8117190249998596, 'f1-score': 0.7984275807438208, 'support': 27619.0} | {'precision': 0.8488551216049527, 'recall': 0.8345704044317318, 'f1-score': 0.8399115427430235, 'support': 27619.0} |
+ | No log | 10.0 | 410 | 0.5889 | {'precision': 0.5963951631302761, 'recall': 0.6271593090211133, 'f1-score': 0.6113904806455386, 'support': 4168.0} | {'precision': 0.7538461538461538, 'recall': 0.8424721189591078, 'f1-score': 0.7956989247311828, 'support': 2152.0} | {'precision': 0.9409945004582951, 'recall': 0.8902016041621504, 'f1-score': 0.9148936170212766, 'support': 9226.0} | {'precision': 0.8769726514087416, 'recall': 0.8791518263894641, 'f1-score': 0.8780608868299139, 'support': 12073.0} | 0.8420 | {'precision': 0.7920521172108667, 'recall': 0.8097462146329589, 'f1-score': 0.800010977306978, 'support': 27619.0} | {'precision': 0.8464230437267781, 'recall': 0.841956624063145, 'f1-score': 0.8437038707660653, 'support': 27619.0} |
+ | No log | 11.0 | 451 | 0.5894 | {'precision': 0.5867732872271451, 'recall': 0.6513915547024952, 'f1-score': 0.6173962478681069, 'support': 4168.0} | {'precision': 0.7666963490650045, 'recall': 0.800185873605948, 'f1-score': 0.7830832196452934, 'support': 2152.0} | {'precision': 0.9389980688401681, 'recall': 0.8959462388900932, 'f1-score': 0.9169671085473403, 'support': 9226.0} | {'precision': 0.8817717491417567, 'recall': 0.8722769816946906, 'f1-score': 0.8769986675549633, 'support': 12073.0} | 0.8412 | {'precision': 0.7935598635685186, 'recall': 0.8049501622233067, 'f1-score': 0.798611310903926, 'support': 27619.0} | {'precision': 0.8474031686468898, 'recall': 0.8412324848835946, 'f1-score': 0.8438555380947815, 'support': 27619.0} |
+ | No log | 12.0 | 492 | 0.6198 | {'precision': 0.5958633511503603, 'recall': 0.6151631477927063, 'f1-score': 0.6053594616928344, 'support': 4168.0} | {'precision': 0.7789770061004223, 'recall': 0.7713754646840149, 'f1-score': 0.7751575998132151, 'support': 2152.0} | {'precision': 0.9328919313208394, 'recall': 0.901040537611099, 'f1-score': 0.9166896399625074, 'support': 9226.0} | {'precision': 0.8742056379338439, 'recall': 0.8887600430713162, 'f1-score': 0.8814227625580152, 'support': 12073.0} | 0.8424 | {'precision': 0.7954844816263664, 'recall': 0.7940847982897842, 'f1-score': 0.7946573660066429, 'support': 27619.0} | {'precision': 0.8443847565032829, 'recall': 0.8424273145298526, 'f1-score': 0.8432627184833189, 'support': 27619.0} |
+ | 0.271 | 13.0 | 533 | 0.6308 | {'precision': 0.5984138428262437, 'recall': 0.5974088291746641, 'f1-score': 0.597910913675111, 'support': 4168.0} | {'precision': 0.7893231649189705, 'recall': 0.7695167286245354, 'f1-score': 0.779294117647059, 'support': 2152.0} | {'precision': 0.9218476357267951, 'recall': 0.9128549750704531, 'f1-score': 0.917329266964383, 'support': 9226.0} | {'precision': 0.8714005235602095, 'recall': 0.8822993456473122, 'f1-score': 0.8768160678273037, 'support': 12073.0} | 0.8407 | {'precision': 0.7952462917580546, 'recall': 0.7905199696292412, 'f1-score': 0.7928375915284641, 'support': 27619.0} | {'precision': 0.8406603119578272, 'recall': 0.8407255874579094, 'f1-score': 0.8406609157922722, 'support': 27619.0} |
+ | 0.271 | 14.0 | 574 | 0.6361 | {'precision': 0.6123370110330993, 'recall': 0.5858925143953935, 'f1-score': 0.5988229524276607, 'support': 4168.0} | {'precision': 0.7828622700762674, 'recall': 0.8108736059479554, 'f1-score': 0.7966217758502625, 'support': 2152.0} | {'precision': 0.9273249392533687, 'recall': 0.9100368523737264, 'f1-score': 0.9185995623632386, 'support': 9226.0} | {'precision': 0.8696145124716553, 'recall': 0.8894226787045474, 'f1-score': 0.879407067687646, 'support': 12073.0} | 0.8444 | {'precision': 0.7980346832085977, 'recall': 0.7990564128554057, 'f1-score': 0.798362839582202, 'support': 27619.0} | {'precision': 0.8433070048087172, 'recall': 0.8443824903146385, 'f1-score': 0.8437056091062111, 'support': 27619.0} |
+ | 0.271 | 15.0 | 615 | 0.6438 | {'precision': 0.6039084842707341, 'recall': 0.6079654510556622, 'f1-score': 0.6059301769488283, 'support': 4168.0} | {'precision': 0.7791218637992832, 'recall': 0.8080855018587361, 'f1-score': 0.7933394160583942, 'support': 2152.0} | {'precision': 0.936604624929498, 'recall': 0.8999566442662043, 'f1-score': 0.9179149853518324, 'support': 9226.0} | {'precision': 0.8701119584617881, 'recall': 0.8883458958005467, 'f1-score': 0.8791343907537195, 'support': 12073.0} | 0.8437 | {'precision': 0.7974367328653259, 'recall': 0.8010883732452874, 'f1-score': 0.7990797422781937, 'support': 27619.0} | {'precision': 0.8450608913228282, 'recall': 0.8436583511350881, 'f1-score': 0.8441745376482147, 'support': 27619.0} |
 
  ### Framework versions
meta_data/README_s42_e10.md ADDED
@@ -0,0 +1,90 @@
+ ---
+ license: apache-2.0
+ base_model: allenai/longformer-base-4096
+ tags:
+ - generated_from_trainer
+ datasets:
+ - essays_su_g
+ metrics:
+ - accuracy
+ model-index:
+ - name: longformer-simple
+ results:
+ - task:
+ name: Token Classification
+ type: token-classification
+ dataset:
+ name: essays_su_g
+ type: essays_su_g
+ config: simple
+ split: train[80%:100%]
+ args: simple
+ metrics:
+ - name: Accuracy
+ type: accuracy
+ value: 0.8427169702016728
+ ---
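The `split: train[80%:100%]` entry uses the datasets library's percent-slicing syntax: evaluation runs on the last 20% of the train split. A minimal sketch of the index arithmetic behind that notation, using simple truncation and an illustrative corpus size (the real `datasets` library documents its own boundary-rounding rules):

```python
# Hypothetical illustration of percent-based split slicing such as
# train[80%:100%]: select a contiguous slice of examples by index.
def percent_slice(n_examples: int, start_pct: int, end_pct: int) -> range:
    """Index range for split[start_pct%:end_pct%], using simple truncation."""
    start = n_examples * start_pct // 100
    end = n_examples * end_pct // 100
    return range(start, end)

idx = percent_slice(402, 80, 100)  # 402 is an illustrative corpus size
print(idx.start, idx.stop)  # 321 402
```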
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # longformer-simple
+
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5229
+ - Claim: {'precision': 0.5976262508727019, 'recall': 0.6161228406909789, 'f1-score': 0.6067336089781453, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7766384306732055, 'recall': 0.8094795539033457, 'f1-score': 0.7927189988623435, 'support': 2152.0}
+ - O: {'precision': 0.9332209106239461, 'recall': 0.8997398655972252, 'f1-score': 0.9161746040505491, 'support': 9226.0}
+ - Premise: {'precision': 0.8752462245567958, 'recall': 0.8832932990971589, 'f1-score': 0.8792513501257369, 'support': 12073.0}
+ - Accuracy: 0.8427
+ - Macro avg: {'precision': 0.7956829541816623, 'recall': 0.8021588898221772, 'f1-score': 0.7987196405041936, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8450333432396857, 'recall': 0.8427169702016728, 'f1-score': 0.8437172024624736, 'support': 27619.0}
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:-----:|:----------:|:---:|:-------:|:--------:|:---------:|:------------:|
+ | No log | 1.0 | 41 | 0.5686 | {'precision': 0.5015495867768595, 'recall': 0.2329654510556622, 'f1-score': 0.31815203145478377, 'support': 4168.0} | {'precision': 0.5294117647058824, 'recall': 0.6565985130111525, 'f1-score': 0.5861854387056629, 'support': 2152.0} | {'precision': 0.9169171576289528, 'recall': 0.826577064816822, 'f1-score': 0.8694066009234452, 'support': 9226.0} | {'precision': 0.778798394230115, 'recall': 0.9480659322455065, 'f1-score': 0.8551363466567052, 'support': 12073.0} | 0.7769 | {'precision': 0.6816692258354524, 'recall': 0.6660517402822859, 'f1-score': 0.6572201044351493, 'support': 27619.0} | {'precision': 0.7636649952988127, 'recall': 0.7768565118215721, 'f1-score': 0.7579106826642613, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4450 | {'precision': 0.5915697674418605, 'recall': 0.4882437619961612, 'f1-score': 0.5349631966351209, 'support': 4168.0} | {'precision': 0.7189862160960427, 'recall': 0.7513940520446096, 'f1-score': 0.7348329925017042, 'support': 2152.0} | {'precision': 0.9161246916348957, 'recall': 0.8855408627791025, 'f1-score': 0.900573192239859, 'support': 9226.0} | {'precision': 0.8421457116507839, 'recall': 0.9076451586184047, 'f1-score': 0.873669523619693, 'support': 12073.0} | 0.8248 | {'precision': 0.7672065967058957, 'recall': 0.7582059588595695, 'f1-score': 0.7610097262490942, 'support': 27619.0} | {'precision': 0.8194472178398864, 'recall': 0.8247945255078026, 'f1-score': 0.8207244155727703, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4291 | {'precision': 0.5668412662263721, 'recall': 0.597168905950096, 'f1-score': 0.5816100011683608, 'support': 4168.0} | {'precision': 0.7096088435374149, 'recall': 0.7755576208178439, 'f1-score': 0.7411190053285968, 'support': 2152.0} | {'precision': 0.9522893882946761, 'recall': 0.858877086494689, 'f1-score': 0.9031743317946088, 'support': 9226.0} | {'precision': 0.8618876941457586, 'recall': 0.8962975233993208, 'f1-score': 0.878755887607601, 'support': 12073.0} | 0.8292 | {'precision': 0.7726567980510554, 'recall': 0.7819752841654874, 'f1-score': 0.7761648064747918, 'support': 27619.0} | {'precision': 0.8356951611844187, 'recall': 0.829247981462037, 'f1-score': 0.8313459864788912, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4224 | {'precision': 0.635280095351609, 'recall': 0.5115163147792706, 'f1-score': 0.5667198298777245, 'support': 4168.0} | {'precision': 0.7965432098765433, 'recall': 0.7495353159851301, 'f1-score': 0.7723246349054346, 'support': 2152.0} | {'precision': 0.9076593465452598, 'recall': 0.9183828311294169, 'f1-score': 0.9129896018533484, 'support': 9226.0} | {'precision': 0.8526699217236302, 'recall': 0.9112896546011762, 'f1-score': 0.8810057655349135, 'support': 12073.0} | 0.8407 | {'precision': 0.7980381433742605, 'recall': 0.7726810291237485, 'f1-score': 0.7832599580428552, 'support': 27619.0} | {'precision': 0.8338592100103474, 'recall': 0.8407255874579094, 'f1-score': 0.8357925898565789, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4366 | {'precision': 0.5841626085115392, 'recall': 0.6619481765834933, 'f1-score': 0.6206276009447756, 'support': 4168.0} | {'precision': 0.7325534489713594, 'recall': 0.8438661710037175, 'f1-score': 0.7842798531634635, 'support': 2152.0} | {'precision': 0.9277765412864456, 'recall': 0.9036418816388467, 'f1-score': 0.9155501866900944, 'support': 9226.0} | {'precision': 0.8999212667308197, 'recall': 0.8520665948811398, 'f1-score': 0.8753403675970047, 'support': 12073.0} | 0.8400 | {'precision': 0.786103466375041, 'recall': 0.8153807060267992, 'f1-score': 0.7989495020988346, 'support': 27619.0} | {'precision': 0.848534001868728, 'recall': 0.8399652413193816, 'f1-score': 0.8432382188039772, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4462 | {'precision': 0.6034715960324617, 'recall': 0.642274472168906, 'f1-score': 0.6222687122268712, 'support': 4168.0} | {'precision': 0.7745405647691618, 'recall': 0.8029739776951673, 'f1-score': 0.7885010266940452, 'support': 2152.0} | {'precision': 0.9244103126714207, 'recall': 0.913288532408411, 'f1-score': 0.9188157679515838, 'support': 9226.0} | {'precision': 0.8895835093351356, 'recall': 0.8721941522405368, 'f1-score': 0.8808030112923464, 'support': 12073.0} | 0.8458 | {'precision': 0.7980014957020449, 'recall': 0.8076827836282554, 'f1-score': 0.8025971295412117, 'support': 27619.0} | {'precision': 0.8490760766340619, 'recall': 0.8458307686737391, 'f1-score': 0.8472935020261775, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.4800 | {'precision': 0.6023660067600193, 'recall': 0.5986084452975048, 'f1-score': 0.6004813477737665, 'support': 4168.0} | {'precision': 0.7868324125230203, 'recall': 0.7941449814126395, 'f1-score': 0.7904717853839038, 'support': 2152.0} | {'precision': 0.9340485601355166, 'recall': 0.8964881855625406, 'f1-score': 0.9148830263812843, 'support': 9226.0} | {'precision': 0.8673895582329317, 'recall': 0.894475275407935, 'f1-score': 0.8807242180809852, 'support': 12073.0} | 0.8427 | {'precision': 0.797659134412872, 'recall': 0.7959292219201549, 'f1-score': 0.796640094404985, 'support': 27619.0} | {'precision': 0.8433850255361078, 'recall': 0.8426807632426953, 'f1-score': 0.8428109571654543, 'support': 27619.0} |
+ | No log | 8.0 | 328 | 0.5120 | {'precision': 0.5682210708117443, 'recall': 0.6314779270633397, 'f1-score': 0.5981818181818181, 'support': 4168.0} | {'precision': 0.7462057335581788, 'recall': 0.8224907063197026, 'f1-score': 0.7824933687002653, 'support': 2152.0} | {'precision': 0.9381360777587193, 'recall': 0.8892261001517451, 'f1-score': 0.9130265427633409, 'support': 9226.0} | {'precision': 0.8802864363942713, 'recall': 0.865484966454071, 'f1-score': 0.8728229545169779, 'support': 12073.0} | 0.8348 | {'precision': 0.7832123296307284, 'recall': 0.8021699249972145, 'f1-score': 0.7916311710406005, 'support': 27619.0} | {'precision': 0.8420696535627841, 'recall': 0.8347514392266193, 'f1-score': 0.8377682740520238, 'support': 27619.0} |
+ | No log | 9.0 | 369 | 0.5199 | {'precision': 0.6004990925589837, 'recall': 0.6350767754318618, 'f1-score': 0.617304104477612, 'support': 4168.0} | {'precision': 0.7618432385874246, 'recall': 0.8220260223048327, 'f1-score': 0.7907912382655341, 'support': 2152.0} | {'precision': 0.9352794749943426, 'recall': 0.8959462388900932, 'f1-score': 0.9151904340124003, 'support': 9226.0} | {'precision': 0.88083976433491, 'recall': 0.879234655843618, 'f1-score': 0.8800364781959874, 'support': 12073.0} | 0.8435 | {'precision': 0.7946153926189152, 'recall': 0.8080709231176013, 'f1-score': 0.8008305637378834, 'support': 27619.0} | {'precision': 0.8474468220550765, 'recall': 0.8435135232991781, 'f1-score': 0.8451766391856577, 'support': 27619.0} |
+ | No log | 10.0 | 410 | 0.5229 | {'precision': 0.5976262508727019, 'recall': 0.6161228406909789, 'f1-score': 0.6067336089781453, 'support': 4168.0} | {'precision': 0.7766384306732055, 'recall': 0.8094795539033457, 'f1-score': 0.7927189988623435, 'support': 2152.0} | {'precision': 0.9332209106239461, 'recall': 0.8997398655972252, 'f1-score': 0.9161746040505491, 'support': 9226.0} | {'precision': 0.8752462245567958, 'recall': 0.8832932990971589, 'f1-score': 0.8792513501257369, 'support': 12073.0} | 0.8427 | {'precision': 0.7956829541816623, 'recall': 0.8021588898221772, 'f1-score': 0.7987196405041936, 'support': 27619.0} | {'precision': 0.8450333432396857, 'recall': 0.8427169702016728, 'f1-score': 0.8437172024624736, 'support': 27619.0} |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
meta_data/README_s42_e11.md ADDED
@@ -0,0 +1,91 @@
+ ---
+ license: apache-2.0
+ base_model: allenai/longformer-base-4096
+ tags:
+ - generated_from_trainer
+ datasets:
+ - essays_su_g
+ metrics:
+ - accuracy
+ model-index:
+ - name: longformer-simple
+ results:
+ - task:
+ name: Token Classification
+ type: token-classification
+ dataset:
+ name: essays_su_g
+ type: essays_su_g
+ config: simple
+ split: train[80%:100%]
+ args: simple
+ metrics:
+ - name: Accuracy
+ type: accuracy
+ value: 0.8439117998479307
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # longformer-simple
+
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5465
+ - Claim: {'precision': 0.6066587395957194, 'recall': 0.6120441458733206, 'f1-score': 0.609339543771647, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7760070827799912, 'recall': 0.8145910780669146, 'f1-score': 0.7948311040580367, 'support': 2152.0}
+ - O: {'precision': 0.9332207207207207, 'recall': 0.8982224149143724, 'f1-score': 0.9153871644758642, 'support': 9226.0}
+ - Premise: {'precision': 0.8730753564154786, 'recall': 0.8876832601673155, 'f1-score': 0.8803187120091999, 'support': 12073.0}
+ - Accuracy: 0.8439
+ - Macro avg: {'precision': 0.7972404748779774, 'recall': 0.8031352247554808, 'f1-score': 0.799969131078687, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8453982409265701, 'recall': 0.8439117998479307, 'f1-score': 0.8444785670702963, 'support': 27619.0}
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 11
+
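The `linear` scheduler listed above decays the learning rate from 2e-05 to zero over the whole run; at 41 optimizer steps per epoch and 11 epochs, that is 451 steps in total, matching the Step column of the results table. A minimal plain-Python sketch of the decay, assuming zero warmup steps (no warmup is listed above):

```python
BASE_LR = 2e-05        # learning_rate from the hyperparameters above
TOTAL_STEPS = 41 * 11  # 41 optimizer steps per epoch x 11 epochs = 451

def linear_lr(step: int, total_steps: int = TOTAL_STEPS, base_lr: float = BASE_LR) -> float:
    """Linear decay to zero; the zero-warmup assumption is ours, not from the card."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)
```

At step 0 the rate is 2e-05 and at the final step (451) it reaches exactly zero.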
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+ | No log | 1.0 | 41 | 0.5694 | {'precision': 0.49896907216494846, 'recall': 0.23224568138195778, 'f1-score': 0.31696136214800263, 'support': 4168.0} | {'precision': 0.5306275836151823, 'recall': 0.6561338289962825, 'f1-score': 0.5867442343652607, 'support': 2152.0} | {'precision': 0.9175605640592985, 'recall': 0.8251680034684588, 'f1-score': 0.8689151401015808, 'support': 9226.0} | {'precision': 0.7781400720059779, 'recall': 0.9488113973328915, 'f1-score': 0.8550421736209599, 'support': 12073.0} | 0.7766 | {'precision': 0.6813243229613518, 'recall': 0.6655897277948977, 'f1-score': 0.656915727558951, 'support': 27619.0} | {'precision': 0.7632974584909895, 'recall': 0.776566856149752, 'f1-score': 0.7575692021611915, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4421 | {'precision': 0.6022099447513812, 'recall': 0.47072936660268716, 'f1-score': 0.5284136816590359, 'support': 4168.0} | {'precision': 0.7127335940895263, 'recall': 0.7620817843866171, 'f1-score': 0.7365820794969683, 'support': 2152.0} | {'precision': 0.9157366071428571, 'recall': 0.8893344894862345, 'f1-score': 0.902342461233916, 'support': 9226.0} | {'precision': 0.8403053435114504, 'recall': 0.9117866313260996, 'f1-score': 0.8745878520637191, 'support': 12073.0} | 0.8261 | {'precision': 0.7677463723738037, 'recall': 0.7584830679504097, 'f1-score': 0.7604815186134098, 'support': 27619.0} | {'precision': 0.8196316337998537, 'recall': 0.8260617690720157, 'f1-score': 0.820864750553667, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4321 | {'precision': 0.5622799295774648, 'recall': 0.613003838771593, 'f1-score': 0.5865472910927456, 'support': 4168.0} | {'precision': 0.7040472175379426, 'recall': 0.7760223048327137, 'f1-score': 0.7382847038019452, 'support': 2152.0} | {'precision': 0.9531174480425326, 'recall': 0.8549750704530674, 'f1-score': 0.9013826991201006, 'support': 9226.0} | {'precision': 0.865615192725517, 'recall': 0.8909964383334714, 'f1-score': 0.8781224489795918, 'support': 12073.0} | 0.8281 | {'precision': 0.7712649469708642, 'recall': 0.7837494130977114, 'f1-score': 0.7760842857485957, 'support': 27619.0} | {'precision': 0.836479458200373, 'recall': 0.828053151815779, 'f1-score': 0.8309948550081105, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4226 | {'precision': 0.6349646044936904, 'recall': 0.4949616122840691, 'f1-score': 0.5562896049615748, 'support': 4168.0} | {'precision': 0.7841284837033538, 'recall': 0.7713754646840149, 'f1-score': 0.777699695479035, 'support': 2152.0} | {'precision': 0.9162947643409165, 'recall': 0.9124214177324951, 'f1-score': 0.91435398902949, 'support': 9226.0} | {'precision': 0.8469309658656053, 'recall': 0.9165907396670256, 'f1-score': 0.8803850590715622, 'support': 12073.0} | 0.8403 | {'precision': 0.7955797046008914, 'recall': 0.7738373085919011, 'f1-score': 0.7821820871354155, 'support': 27619.0} | {'precision': 0.833220247480505, 'recall': 0.8402548969912017, 'f1-score': 0.8348218088673657, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4429 | {'precision': 0.5707901685963843, 'recall': 0.6741842610364683, 'f1-score': 0.6181938180618194, 'support': 4168.0} | {'precision': 0.7293650793650793, 'recall': 0.854089219330855, 'f1-score': 0.7868150684931506, 'support': 2152.0} | {'precision': 0.9286749136297783, 'recall': 0.9032083243008888, 'f1-score': 0.9157646024506841, 'support': 9226.0} | {'precision': 0.9048469160046416, 'recall': 0.8396421767580552, 'f1-score': 0.871025949475855, 'support': 12073.0} | 0.8370 | {'precision': 0.7834192693989709, 'recall': 0.8177809953565668, 'f1-score': 0.7979498596203773, 'support': 27619.0} | {'precision': 0.8487207590273272, 'recall': 0.8370324776422028, 'f1-score': 0.8412541500891029, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4469 | {'precision': 0.6041713388652165, 'recall': 0.6463531669865643, 'f1-score': 0.6245508287933232, 'support': 4168.0} | {'precision': 0.7802893309222423, 'recall': 0.8020446096654275, 'f1-score': 0.7910174152153987, 'support': 2152.0} | {'precision': 0.9200608232866297, 'recall': 0.9181660524604379, 'f1-score': 0.9191124613464982, 'support': 9226.0} | {'precision': 0.8927689293927263, 'recall': 0.8682183384411497, 'f1-score': 0.8803224993701184, 'support': 12073.0} | 0.8463 | {'precision': 0.7993226056167038, 'recall': 0.8086955418883948, 'f1-score': 0.8037508011813346, 'support': 27619.0} | {'precision': 0.8495691089733778, 'recall': 0.8462652521814693, 'f1-score': 0.8477230325222614, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.4855 | {'precision': 0.6054081121682524, 'recall': 0.5801343570057581, 'f1-score': 0.5925018377848567, 'support': 4168.0} | {'precision': 0.7954866008462623, 'recall': 0.7862453531598513, 'f1-score': 0.7908389810703436, 'support': 2152.0} | {'precision': 0.9300268696820421, 'recall': 0.9003902016041622, 'f1-score': 0.9149686088776298, 'support': 9226.0} | {'precision': 0.8639185102657966, 'recall': 0.8991965542947072, 'f1-score': 0.8812045943423028, 'support': 12073.0} | 0.8426 | {'precision': 0.7987100232405884, 'recall': 0.7914916165161198, 'f1-score': 0.7948785055187831, 'support': 27619.0} | {'precision': 0.8416577084856046, 'recall': 0.8426445562837177, 'f1-score': 0.8418739490984576, 'support': 27619.0} |
+ | No log | 8.0 | 328 | 0.5255 | {'precision': 0.5770187740330242, 'recall': 0.6120441458733206, 'f1-score': 0.5940156013505647, 'support': 4168.0} | {'precision': 0.7475521498510004, 'recall': 0.8159851301115242, 'f1-score': 0.7802710508775829, 'support': 2152.0} | {'precision': 0.9392967586759822, 'recall': 0.8889009321482766, 'f1-score': 0.9134042434705129, 'support': 9226.0} | {'precision': 0.8744017164548605, 'recall': 0.8776608962146939, 'f1-score': 0.8760282749782976, 'support': 12073.0} | 0.8365 | {'precision': 0.7845673497537168, 'recall': 0.7986477760869539, 'f1-score': 0.7909297926692396, 'support': 27619.0} | {'precision': 0.841317581916548, 'recall': 0.8365255802165176, 'f1-score': 0.8384936906473678, 'support': 27619.0} |
+ | No log | 9.0 | 369 | 0.5319 | {'precision': 0.5885775862068966, 'recall': 0.6552303262955854, 'f1-score': 0.6201180744777476, 'support': 4168.0} | {'precision': 0.7612709317303564, 'recall': 0.8238847583643123, 'f1-score': 0.7913412184780183, 'support': 2152.0} | {'precision': 0.9386587319752804, 'recall': 0.8890093214827661, 'f1-score': 0.9131596526386107, 'support': 9226.0} | {'precision': 0.8836467427803896, 'recall': 0.8718628344239211, 'f1-score': 0.8777152386908486, 'support': 12073.0} | 0.8412 | {'precision': 0.7930384981732308, 'recall': 0.8099968101416463, 'f1-score': 0.8005835460713063, 'support': 27619.0} | {'precision': 0.8479589779204769, 'recall': 0.8411600709656396, 'f1-score': 0.8439511013630612, 'support': 27619.0} |
+ | No log | 10.0 | 410 | 0.5409 | {'precision': 0.6034322820037106, 'recall': 0.6242802303262955, 'f1-score': 0.6136792452830189, 'support': 4168.0} | {'precision': 0.7689295039164491, 'recall': 0.8210966542750929, 'f1-score': 0.7941573033707865, 'support': 2152.0} | {'precision': 0.9326761510025765, 'recall': 0.9024495989594624, 'f1-score': 0.9173139425990195, 'support': 9226.0} | {'precision': 0.8804833636815097, 'recall': 0.8811397332891576, 'f1-score': 0.8808114262057546, 'support': 12073.0} | 0.8448 | {'precision': 0.7963803251510615, 'recall': 0.8072415542125021, 'f1-score': 0.8014904793646449, 'support': 27619.0} | {'precision': 0.8474161940220971, 'recall': 0.8448169738223686, 'f1-score': 0.8459399831345878, 'support': 27619.0} |
+ | No log | 11.0 | 451 | 0.5465 | {'precision': 0.6066587395957194, 'recall': 0.6120441458733206, 'f1-score': 0.609339543771647, 'support': 4168.0} | {'precision': 0.7760070827799912, 'recall': 0.8145910780669146, 'f1-score': 0.7948311040580367, 'support': 2152.0} | {'precision': 0.9332207207207207, 'recall': 0.8982224149143724, 'f1-score': 0.9153871644758642, 'support': 9226.0} | {'precision': 0.8730753564154786, 'recall': 0.8876832601673155, 'f1-score': 0.8803187120091999, 'support': 12073.0} | 0.8439 | {'precision': 0.7972404748779774, 'recall': 0.8031352247554808, 'f1-score': 0.799969131078687, 'support': 27619.0} | {'precision': 0.8453982409265701, 'recall': 0.8439117998479307, 'f1-score': 0.8444785670702963, 'support': 27619.0} |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
meta_data/README_s42_e12.md ADDED
@@ -0,0 +1,92 @@
+ ---
+ license: apache-2.0
+ base_model: allenai/longformer-base-4096
+ tags:
+ - generated_from_trainer
+ datasets:
+ - essays_su_g
+ metrics:
+ - accuracy
+ model-index:
+ - name: longformer-simple
+ results:
+ - task:
+ name: Token Classification
+ type: token-classification
+ dataset:
+ name: essays_su_g
+ type: essays_su_g
+ config: simple
+ split: train[80%:100%]
+ args: simple
+ metrics:
+ - name: Accuracy
+ type: accuracy
+ value: 0.8428617980375829
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # longformer-simple
+
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5674
+ - Claim: {'precision': 0.6009334889148191, 'recall': 0.6178023032629558, 'f1-score': 0.6092511534366496, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7783711615487316, 'recall': 0.8127323420074349, 'f1-score': 0.7951807228915663, 'support': 2152.0}
+ - O: {'precision': 0.9334009465855307, 'recall': 0.8977888575764145, 'f1-score': 0.9152486187845303, 'support': 9226.0}
+ - Premise: {'precision': 0.8738229755178908, 'recall': 0.8839559347303901, 'f1-score': 0.8788602487029565, 'support': 12073.0}
+ - Accuracy: 0.8429
+ - Macro avg: {'precision': 0.7966321431417431, 'recall': 0.8030698593942989, 'f1-score': 0.7996351859539257, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8451054505259219, 'recall': 0.8428617980375829, 'f1-score': 0.8438086557327736, 'support': 27619.0}
+
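The macro and weighted averages above differ only in how the classes are combined: the macro average is the unweighted mean of the four per-class scores, while the weighted average weights each class by its token support. A quick plain-Python check against the f1-scores listed above:

```python
# Per-class f1-scores and supports, copied from the evaluation results above.
f1 = {"Claim": 0.6092511534366496, "Majorclaim": 0.7951807228915663,
      "O": 0.9152486187845303, "Premise": 0.8788602487029565}
support = {"Claim": 4168, "Majorclaim": 2152, "O": 9226, "Premise": 12073}

macro_f1 = sum(f1.values()) / len(f1)                                      # unweighted mean
weighted_f1 = sum(f1[c] * support[c] for c in f1) / sum(support.values())  # support-weighted

print(round(macro_f1, 4), round(weighted_f1, 4))  # 0.7996 0.8438
```

Both values reproduce the "Macro avg" and "Weighted avg" f1-scores reported on the card.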
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 12
+
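For reference, the optimizer line above corresponds to the standard Adam update with betas=(0.9,0.999) and epsilon=1e-08. A scalar, textbook sketch of one update step (illustrative only; the Trainer's actual optimizer is vectorized and may apply weight decay):

```python
def adam_step(param, grad, m, v, t, lr=2e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One textbook Adam update for a single scalar parameter (t is 1-based)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment (uncentered var) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v
```

On the very first step the bias correction cancels the moment decay, so the parameter moves by almost exactly the base learning rate in the direction opposite the gradient.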
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+ | No log | 1.0 | 41 | 0.5695 | {'precision': 0.4966580976863753, 'recall': 0.2317658349328215, 'f1-score': 0.3160477670538197, 'support': 4168.0} | {'precision': 0.5313791807591132, 'recall': 0.6570631970260223, 'f1-score': 0.5875753168501975, 'support': 2152.0} | {'precision': 0.9187296220263254, 'recall': 0.8246260567960113, 'f1-score': 0.8691380590620894, 'support': 9226.0} | {'precision': 0.7775590551181102, 'recall': 0.9488113973328915, 'f1-score': 0.8546912889386308, 'support': 12073.0} | 0.7764 | {'precision': 0.681081488897481, 'recall': 0.6655666215219367, 'f1-score': 0.6568631079761843, 'support': 27619.0} | {'precision': 0.7631438109057622, 'recall': 0.7763858213548644, 'f1-score': 0.7574171707594364, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4419 | {'precision': 0.6064395123476086, 'recall': 0.4654510556621881, 'f1-score': 0.5266730012216642, 'support': 4168.0} | {'precision': 0.7129909365558912, 'recall': 0.7676579925650557, 'f1-score': 0.7393152830610875, 'support': 2152.0} | {'precision': 0.9188125491959969, 'recall': 0.885649252113592, 'f1-score': 0.9019261548650588, 'support': 9226.0} | {'precision': 0.8368660105980318, 'recall': 0.9156796156713327, 'f1-score': 0.8745006526124274, 'support': 12073.0} | 0.8262 | {'precision': 0.7687772521743822, 'recall': 0.7586094790030422, 'f1-score': 0.7606037729400594, 'support': 27619.0} | {'precision': 0.8198140522019413, 'recall': 0.8261703899489482, 'f1-score': 0.8206378450347307, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4306 | {'precision': 0.5615019421665948, 'recall': 0.6242802303262955, 'f1-score': 0.5912292660758918, 'support': 4168.0} | {'precision': 0.716514954486346, 'recall': 0.7681226765799256, 'f1-score': 0.7414218434626599, 'support': 2152.0} | {'precision': 0.9523123123123123, 'recall': 0.8593106438326469, 'f1-score': 0.9034243063073331, 'support': 9226.0} | {'precision': 0.8682911033756983, 'recall': 0.8884287252547006, 'f1-score': 0.8782444935724228, 'support': 12073.0} | 0.8295 | {'precision': 0.7746550780852379, 'recall': 0.7850355689983921, 'f1-score': 0.778579977354577, 'support': 27619.0} | {'precision': 0.8382342648703133, 'recall': 0.8294652232159021, 'f1-score': 0.8326811908116615, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4218 | {'precision': 0.6392779333955805, 'recall': 0.4928023032629559, 'f1-score': 0.5565641511990246, 'support': 4168.0} | {'precision': 0.7853107344632768, 'recall': 0.775092936802974, 'f1-score': 0.7801683816651075, 'support': 2152.0} | {'precision': 0.9142207299902524, 'recall': 0.9149143724257534, 'f1-score': 0.9145674196868736, 'support': 9226.0} | {'precision': 0.8483408690321097, 'recall': 0.9169220574836412, 'f1-score': 0.8812992596130881, 'support': 12073.0} | 0.8412 | {'precision': 0.7967875667203048, 'recall': 0.7749329174938311, 'f1-score': 0.7831498030410234, 'support': 27619.0} | {'precision': 0.8338867769894811, 'recall': 0.8411962779246172, 'f1-score': 0.8355265112741501, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4481 | {'precision': 0.5631399317406144, 'recall': 0.6729846449136276, 'f1-score': 0.6131817684992896, 'support': 4168.0} | {'precision': 0.7194693718298869, 'recall': 0.8568773234200744, 'f1-score': 0.782184517497349, 'support': 2152.0} | {'precision': 0.9297755945070895, 'recall': 0.9026663776284414, 'f1-score': 0.9160204586701864, 'support': 9226.0} | {'precision': 0.9064579960424537, 'recall': 0.8347552389629752, 'f1-score': 0.8691302660514854, 'support': 12073.0} | 0.8348 | {'precision': 0.7797107235300111, 'recall': 0.8168208962312796, 'f1-score': 0.7951292526795776, 'support': 27619.0} | {'precision': 0.8478671329452823, 'recall': 0.8347514392266193, 'f1-score': 0.839393792189799, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4486 | {'precision': 0.5991150442477876, 'recall': 0.6497120921305183, 'f1-score': 0.6233885819521179, 'support': 4168.0} | {'precision': 0.7814183123877917, 'recall': 0.8090148698884758, 'f1-score': 0.7949771689497717, 'support': 2152.0} | {'precision': 0.917592492719232, 'recall': 0.9220680685020594, 'f1-score': 0.9198248364599665, 'support': 9226.0} | {'precision': 0.8957758620689655, 'recall': 0.860680858113145, 'f1-score': 0.8778777510243738, 'support': 12073.0} | 0.8453 | {'precision': 0.7984754278559442, 'recall': 0.8103689721585496, 'f1-score': 0.8040170845965575, 'support': 27619.0} | {'precision': 0.8493839035906282, 'recall': 0.8453238712480539, 'f1-score': 0.8470254718292933, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.4938 | {'precision': 0.6172607879924953, 'recall': 0.5525431861804223, 'f1-score': 0.5831117863020636, 'support': 4168.0} | {'precision': 0.8030447193149381, 'recall': 0.7843866171003717, 'f1-score': 0.7936060178655383, 'support': 2152.0} | {'precision': 0.925684628975265, 'recall': 0.9086277910253631, 'f1-score': 0.9170769062465813, 'support': 9226.0} | {'precision': 0.8585231736056559, 'recall': 0.9052431044479416, 'f1-score': 0.8812643631818732, 'support': 12073.0} | 0.8437 | {'precision': 0.8011283274720886, 'recall': 0.7877001746885246, 'f1-score': 0.7937647683990142, 'support': 27619.0} | {'precision': 0.840226360917678, 'recall': 0.8437307650530432, 'f1-score': 0.8414028845895708, 'support': 27619.0} |
+ | No log | 8.0 | 328 | 0.5387 | {'precision': 0.5799445471349353, 'recall': 0.6022072936660269, 'f1-score': 0.5908662900188324, 'support': 4168.0} | {'precision': 0.7539513028620248, 'recall': 0.8201672862453532, 'f1-score': 0.7856665924771868, 'support': 2152.0} | {'precision': 0.9451071221771858, 'recall': 0.8845653587686971, 'f1-score': 0.9138346117238675, 'support': 9226.0} | {'precision': 0.8677222898903776, 'recall': 0.8851155470885447, 'f1-score': 0.8763326226012793, 'support': 12073.0} | 0.8372 | {'precision': 0.7866813155161309, 'recall': 0.7980138714421554, 'f1-score': 0.7916750292052914, 'support': 27619.0} | {'precision': 0.8412788874061601, 'recall': 0.8371773054781129, 'f1-score': 0.8387156335942302, 'support': 27619.0} |
+ | No log | 9.0 | 369 | 0.5456 | {'precision': 0.5713418336369156, 'recall': 0.6773032629558541, 'f1-score': 0.6198265451751016, 'support': 4168.0} | {'precision': 0.7620881471972615, 'recall': 0.8276022304832714, 'f1-score': 0.7934952105145913, 'support': 2152.0} | {'precision': 0.9396749084249084, 'recall': 0.8897680468241925, 'f1-score': 0.9140407527001447, 'support': 9226.0} | {'precision': 0.8894442050840156, 'recall': 0.8549656257765261, 'f1-score': 0.8718641777177126, 'support': 12073.0} | 0.8376 | {'precision': 0.7906372735857753, 'recall': 0.8124097915099611, 'f1-score': 0.7998066715268876, 'support': 27619.0} | {'precision': 0.848295269505583, 'recall': 0.8376479959448206, 'f1-score': 0.8418116128503821, 'support': 27619.0} |
+ | No log | 10.0 | 410 | 0.5597 | {'precision': 0.6104051786142412, 'recall': 0.6108445297504799, 'f1-score': 0.610624775152896, 'support': 4168.0} | {'precision': 0.7580782312925171, 'recall': 0.8285315985130112, 'f1-score': 0.7917406749555951, 'support': 2152.0} | {'precision': 0.9353891336270191, 'recall': 0.8975720789074355, 'f1-score': 0.9160904917307374, 'support': 9226.0} | {'precision': 0.876255819652046, 'recall': 0.8885943841630084, 'f1-score': 0.8823819707188683, 'support': 12073.0} | 0.8450 | {'precision': 0.7950320907964559, 'recall': 0.8063856478334838, 'f1-score': 0.8002094781395241, 'support': 27619.0} | {'precision': 0.8466812627433173, 'recall': 0.8449980086172563, 'f1-score': 0.8455685725239289, 'support': 27619.0} |
+ | No log | 11.0 | 451 | 0.5618 | {'precision': 0.5962230215827338, 'recall': 0.6362763915547025, 'f1-score': 0.6155988857938719, 'support': 4168.0} | {'precision': 0.7545803152961227, 'recall': 0.8229553903345725, 'f1-score': 0.7872860635696821, 'support': 2152.0} | {'precision': 0.9359309326366012, 'recall': 0.8930197268588771, 'f1-score': 0.9139719341061624, 'support': 9226.0} | {'precision': 0.880459196406289, 'recall': 0.8766669427648471, 'f1-score': 0.8785589773387565, 'support': 12073.0} | 0.8417 | {'precision': 0.7917983664804367, 'recall': 0.8072296128782498, 'f1-score': 0.7988539652021183, 'support': 27619.0} | {'precision': 0.8462868697343314, 'recall': 0.8416669683913248, 'f1-score': 0.8435933003463223, 'support': 27619.0} |
+ | No log | 12.0 | 492 | 0.5674 | {'precision': 0.6009334889148191, 'recall': 0.6178023032629558, 'f1-score': 0.6092511534366496, 'support': 4168.0} | {'precision': 0.7783711615487316, 'recall': 0.8127323420074349, 'f1-score': 0.7951807228915663, 'support': 2152.0} | {'precision': 0.9334009465855307, 'recall': 0.8977888575764145, 'f1-score': 0.9152486187845303, 'support': 9226.0} | {'precision': 0.8738229755178908, 'recall': 0.8839559347303901, 'f1-score': 0.8788602487029565, 'support': 12073.0} | 0.8429 | {'precision': 0.7966321431417431, 'recall': 0.8030698593942989, 'f1-score': 0.7996351859539257, 'support': 27619.0} | {'precision': 0.8451054505259219, 'recall': 0.8428617980375829, 'f1-score': 0.8438086557327736, 'support': 27619.0} |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
meta_data/README_s42_e13.md ADDED
@@ -0,0 +1,93 @@
+ ---
+ license: apache-2.0
+ base_model: allenai/longformer-base-4096
+ tags:
+ - generated_from_trainer
+ datasets:
+ - essays_su_g
+ metrics:
+ - accuracy
+ model-index:
+ - name: longformer-simple
+ results:
+ - task:
+ name: Token Classification
+ type: token-classification
+ dataset:
+ name: essays_su_g
+ type: essays_su_g
+ config: simple
+ split: train[80%:100%]
+ args: simple
+ metrics:
+ - name: Accuracy
+ type: accuracy
+ value: 0.8416669683913248
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # longformer-simple
+
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5991
+ - Claim: {'precision': 0.6002427184466019, 'recall': 0.5933301343570058, 'f1-score': 0.5967664092664092, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7753721244925575, 'recall': 0.7987918215613383, 'f1-score': 0.7869077592126345, 'support': 2152.0}
+ - O: {'precision': 0.9312121891104638, 'recall': 0.9009321482766096, 'f1-score': 0.9158219479947113, 'support': 9226.0}
+ - Premise: {'precision': 0.8693752023308514, 'recall': 0.8897539965211629, 'f1-score': 0.8794465594170862, 'support': 12073.0}
+ - Accuracy: 0.8417
+ - Macro avg: {'precision': 0.7940505585951186, 'recall': 0.7957020251790292, 'f1-score': 0.7947356689727103, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8420921444247412, 'recall': 0.8416669683913248, 'f1-score': 0.8417277778228636, 'support': 27619.0}
+
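Each f1-score in the lists above is the harmonic mean of that class's precision and recall, so every entry can be re-derived from the other two fields. A minimal check against the Claim row above:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (defined as zero when both are zero)."""
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Precision and recall from the Claim row of the evaluation results above.
claim_f1 = f1_score(0.6002427184466019, 0.5933301343570058)
# claim_f1 reproduces the listed f1-score, ~0.5968.
```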
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 13
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+ | No log | 1.0 | 41 | 0.5694 | {'precision': 0.49641025641025643, 'recall': 0.23224568138195778, 'f1-score': 0.3164432821183394, 'support': 4168.0} | {'precision': 0.5319789315274642, 'recall': 0.6570631970260223, 'f1-score': 0.5879417879417879, 'support': 2152.0} | {'precision': 0.9192651679961324, 'recall': 0.8244092781270324, 'f1-score': 0.8692571428571428, 'support': 9226.0} | {'precision': 0.7774988125127231, 'recall': 0.9490598856953533, 'f1-score': 0.8547556881760537, 'support': 12073.0} | 0.7765 | {'precision': 0.6812882921116441, 'recall': 0.6656945105575914, 'f1-score': 0.657099475273331, 'support': 27619.0} | {'precision': 0.7633057030581657, 'recall': 0.776494442231797, 'f1-score': 0.7575733426579333, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4419 | {'precision': 0.6089438629876308, 'recall': 0.46065259117082535, 'f1-score': 0.5245185084004917, 'support': 4168.0} | {'precision': 0.7108843537414966, 'recall': 0.7769516728624535, 'f1-score': 0.7424511545293072, 'support': 2152.0} | {'precision': 0.9222750963063675, 'recall': 0.8822891827444179, 'f1-score': 0.901839131398183, 'support': 9226.0} | {'precision': 0.8349638771824203, 'recall': 0.9189927938374887, 'f1-score': 0.8749654982059067, 'support': 12073.0} | 0.8265 | {'precision': 0.7692667975544787, 'recall': 0.7597215601537963, 'f1-score': 0.7609435731334722, 'support': 27619.0} | {'precision': 0.820353020671641, 'recall': 0.8264962525797458, 'f1-score': 0.820731174686986, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4293 | {'precision': 0.559539052496799, 'recall': 0.6290786948176583, 'f1-score': 0.5922746781115881, 'support': 4168.0} | {'precision': 0.7270703472840605, 'recall': 0.7588289962825279, 'f1-score': 0.7426102773988177, 'support': 2152.0} | {'precision': 0.9509253731343283, 'recall': 0.8632126598742683, 'f1-score': 0.9049485824669052, 'support': 9226.0} | {'precision': 0.8691520467836257, 'recall': 0.8863579889008532, 'f1-score': 0.877670699200328, 'support': 12073.0} | 0.8299 | {'precision': 0.7766717049247034, 'recall': 0.7843695849688269, 'f1-score': 0.7793760592944097, 'support': 27619.0} | {'precision': 0.8386735331300186, 'recall': 0.8298634997646548, 'f1-score': 0.8331899108807915, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4211 | {'precision': 0.6451208032632569, 'recall': 0.4932821497120921, 'f1-score': 0.559075458871516, 'support': 4168.0} | {'precision': 0.78687761749651, 'recall': 0.7857806691449815, 'f1-score': 0.786328760753313, 'support': 2152.0} | {'precision': 0.9139459459459459, 'recall': 0.9163234337741166, 'f1-score': 0.915133145702533, 'support': 9226.0} | {'precision': 0.849535793754316, 'recall': 0.917087716391949, 'f1-score': 0.8820202342069625, 'support': 12073.0} | 0.8426 | {'precision': 0.7988700401150072, 'recall': 0.7781184922557848, 'f1-score': 0.7856393998835811, 'support': 27619.0} | {'precision': 0.8353211584831782, 'recall': 0.8426445562837177, 'f1-score': 0.836889630165822, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4516 | {'precision': 0.5637651821862348, 'recall': 0.6681861804222649, 'f1-score': 0.6115502854633289, 'support': 4168.0} | {'precision': 0.7106579453636014, 'recall': 0.858271375464684, 'f1-score': 0.7775205219953694, 'support': 2152.0} | {'precision': 0.930641011298803, 'recall': 0.901690873618036, 'f1-score': 0.9159372419488027, 'support': 9226.0} | {'precision': 0.9066511085180864, 'recall': 0.8366603164085149, 'f1-score': 0.8702507107779788, 'support': 12073.0} | 0.8346 | {'precision': 0.7779288118416814, 'recall': 0.8162021864783748, 'f1-score': 0.79381469004637, 'support': 27619.0} | {'precision': 0.8476484297460557, 'recall': 0.8346428183496868, 'f1-score': 0.8392461558560188, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4529 | {'precision': 0.5875982161817795, 'recall': 0.6638675623800384, 'f1-score': 0.6234088092824153, 'support': 4168.0} | {'precision': 0.7763975155279503, 'recall': 0.8131970260223048, 'f1-score': 0.7943713118474807, 'support': 2152.0} | {'precision': 0.9178008209116439, 'recall': 0.9209841751571646, 'f1-score': 0.9193897424799826, 'support': 9226.0} | {'precision': 0.9001579224425338, 'recall': 0.8498301996189845, 'f1-score': 0.8742703762089388, 'support': 12073.0} | 0.8427 | {'precision': 0.7954886187659769, 'recall': 0.8119697407946231, 'f1-score': 0.8028600599547043, 'support': 27619.0} | {'precision': 0.8492397910801023, 'recall': 0.8426807632426953, 'f1-score': 0.8452590968635983, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.5035 | {'precision': 0.6264514301897479, 'recall': 0.5307101727447217, 'f1-score': 0.574620080529939, 'support': 4168.0} | {'precision': 0.8070770722249152, 'recall': 0.7736988847583643, 'f1-score': 0.7900355871886122, 'support': 2152.0} | {'precision': 0.9218476357267951, 'recall': 0.9128549750704531, 'f1-score': 0.917329266964383, 'support': 9226.0} | {'precision': 0.8542943595313833, 'recall': 0.9120351196885612, 'f1-score': 0.8822209758833427, 'support': 12073.0} | 0.8440 | {'precision': 0.8024176244182103, 'recall': 0.7823247880655251, 'f1-score': 0.7910514776415692, 'support': 27619.0} | {'precision': 0.8387972595060172, 'recall': 0.8439842137658858, 'f1-score': 0.8403456583559027, 'support': 27619.0} |
+ | No log | 8.0 | 328 | 0.5478 | {'precision': 0.5803487276154571, 'recall': 0.5909309021113244, 'f1-score': 0.5855920114122681, 'support': 4168.0} | {'precision': 0.7547730165464573, 'recall': 0.8266728624535316, 'f1-score': 0.7890884896872921, 'support': 2152.0} | {'precision': 0.948723924950472, 'recall': 0.8823975720789075, 'f1-score': 0.9143595215364744, 'support': 9226.0} | {'precision': 0.8638739245798827, 'recall': 0.8899196554294707, 'f1-score': 0.8767033863729091, 'support': 12073.0} | 0.8374 | {'precision': 0.7869298984230673, 'recall': 0.7974802480183085, 'f1-score': 0.7914358522522359, 'support': 27619.0} | {'precision': 0.8409298617384836, 'recall': 0.8373583402730005, 'f1-score': 0.8385237286921696, 'support': 27619.0} |
+ | No log | 9.0 | 369 | 0.5551 | {'precision': 0.5595470519328387, 'recall': 0.6876199616122841, 'f1-score': 0.6170075349838535, 'support': 4168.0} | {'precision': 0.7613151152860803, 'recall': 0.8285315985130112, 'f1-score': 0.7935024477080552, 'support': 2152.0} | {'precision': 0.9396424097483203, 'recall': 0.8943203988727509, 'f1-score': 0.916421391681013, 'support': 9226.0} | {'precision': 0.8944082996307368, 'recall': 0.8426240371075955, 'f1-score': 0.8677442743208086, 'support': 12073.0} | 0.8354 | {'precision': 0.788728219149494, 'recall': 0.8132739990264104, 'f1-score': 0.7986689121734325, 'support': 27619.0} | {'precision': 0.84861416106056, 'recall': 0.8354031644882146, 'f1-score': 0.8403810802999596, 'support': 27619.0} |
+ | No log | 10.0 | 410 | 0.5735 | {'precision': 0.6080889309366131, 'recall': 0.6168426103646834, 'f1-score': 0.6124344926155313, 'support': 4168.0} | {'precision': 0.7508333333333334, 'recall': 0.837360594795539, 'f1-score': 0.7917398945518453, 'support': 2152.0} | {'precision': 0.9385544915640675, 'recall': 0.8923693908519401, 'f1-score': 0.914879431047894, 'support': 9226.0} | {'precision': 0.8762582862754726, 'recall': 0.8868549656257765, 'f1-score': 0.8815247818211758, 'support': 12073.0} | 0.8441 | {'precision': 0.7934337605273717, 'recall': 0.8083568904094847, 'f1-score': 0.8001446500091116, 'support': 27619.0} | {'precision': 0.8468256644647166, 'recall': 0.8440928346428184, 'f1-score': 0.8450623679377253, 'support': 27619.0} |
+ | No log | 11.0 | 451 | 0.5724 | {'precision': 0.5911991199119911, 'recall': 0.6446737044145874, 'f1-score': 0.6167795248479283, 'support': 4168.0} | {'precision': 0.7570093457943925, 'recall': 0.8280669144981413, 'f1-score': 0.7909454061251665, 'support': 2152.0} | {'precision': 0.9376067402937492, 'recall': 0.8925861695209192, 'f1-score': 0.9145427286356822, 'support': 9226.0} | {'precision': 0.8821311887408897, 'recall': 0.8721941522405368, 'f1-score': 0.8771345272803, 'support': 12073.0} | 0.8412 | {'precision': 0.7919865986852557, 'recall': 0.8093802351685462, 'f1-score': 0.7998505467222692, 'support': 27619.0} | {'precision': 0.8470086415714401, 'recall': 0.8412324848835946, 'f1-score': 0.8436246039246672, 'support': 27619.0} |
+ | No log | 12.0 | 492 | 0.5952 | {'precision': 0.6041920545941993, 'recall': 0.5947696737044146, 'f1-score': 0.5994438399226211, 'support': 4168.0} | {'precision': 0.7799544419134397, 'recall': 0.7955390334572491, 'f1-score': 0.7876696572348746, 'support': 2152.0} | {'precision': 0.930126130148454, 'recall': 0.9032083243008888, 'f1-score': 0.9164696178168821, 'support': 9226.0} | {'precision': 0.869600388286685, 'recall': 0.8904166321543941, 'f1-score': 0.8798854102721507, 'support': 12073.0} | 0.8427 | {'precision': 0.7959682537356945, 'recall': 0.7959834159042366, 'f1-score': 0.7958671313116321, 'support': 27619.0} | {'precision': 0.8427808250509116, 'recall': 0.8426807632426953, 'f1-score': 0.842599380113732, 'support': 27619.0} |
+ | 0.2759 | 13.0 | 533 | 0.5991 | {'precision': 0.6002427184466019, 'recall': 0.5933301343570058, 'f1-score': 0.5967664092664092, 'support': 4168.0} | {'precision': 0.7753721244925575, 'recall': 0.7987918215613383, 'f1-score': 0.7869077592126345, 'support': 2152.0} | {'precision': 0.9312121891104638, 'recall': 0.9009321482766096, 'f1-score': 0.9158219479947113, 'support': 9226.0} | {'precision': 0.8693752023308514, 'recall': 0.8897539965211629, 'f1-score': 0.8794465594170862, 'support': 12073.0} | 0.8417 | {'precision': 0.7940505585951186, 'recall': 0.7957020251790292, 'f1-score': 0.7947356689727103, 'support': 27619.0} | {'precision': 0.8420921444247412, 'recall': 0.8416669683913248, 'f1-score': 0.8417277778228636, 'support': 27619.0} |
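The `Macro avg` and `Weighted avg` columns follow the usual classification-report conventions: the macro average is the unweighted mean of the per-class scores, while the weighted average weights each class by its `support` (token count). A small sketch of both reductions, under that assumption:

```python
def macro_avg(per_class, key="f1-score"):
    # Unweighted mean over classes: every class counts equally,
    # regardless of how many tokens it covers.
    return sum(c[key] for c in per_class) / len(per_class)

def weighted_avg(per_class, key="f1-score"):
    # Mean weighted by each class's support, so large classes
    # (e.g. Premise) dominate the aggregate.
    total = sum(c["support"] for c in per_class)
    return sum(c[key] * c["support"] for c in per_class) / total
```

Applied to the four per-class f1-scores of the epoch-13 row, this reproduces the reported macro f1 of about 0.7947 and weighted f1 of about 0.8417.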
+
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
meta_data/README_s42_e14.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ license: apache-2.0
+ base_model: allenai/longformer-base-4096
+ tags:
+ - generated_from_trainer
+ datasets:
+ - essays_su_g
+ metrics:
+ - accuracy
+ model-index:
+ - name: longformer-simple
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: essays_su_g
+       type: essays_su_g
+       config: simple
+       split: train[80%:100%]
+       args: simple
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.8449618016582787
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # longformer-simple
+
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6109
+ - Claim: {'precision': 0.6048215551878988, 'recall': 0.6139635316698656, 'f1-score': 0.6093582569353494, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7849462365591398, 'recall': 0.8141263940520446, 'f1-score': 0.7992700729927007, 'support': 2152.0}
+ - O: {'precision': 0.931049822064057, 'recall': 0.9074355083459787, 'f1-score': 0.9190910088923043, 'support': 9226.0}
+ - Premise: {'precision': 0.8758632028937849, 'recall': 0.8824650045556199, 'f1-score': 0.8791517101951561, 'support': 12073.0}
+ - Accuracy: 0.8450
+ - Macro avg: {'precision': 0.7991702041762201, 'recall': 0.8044976096558771, 'f1-score': 0.8017177622538776, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8463109688981529, 'recall': 0.8449618016582787, 'f1-score': 0.8455543885446015, 'support': 27619.0}
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 14
+
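The optimizer line corresponds to Adam's standard update rule: exponential moving averages of the gradient and its square, bias-corrected, with `epsilon` stabilizing the division. An illustrative single-parameter sketch of that rule, not the Trainer's actual `torch.optim` implementation:

```python
import math

def adam_step(param, grad, m, v, t, lr=2e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a scalar parameter at (1-based) step t."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment EMA of the gradient
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                # bias correction for the EMAs
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```

After bias correction, the very first step moves the parameter by roughly `lr` in the direction opposite the gradient's sign.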
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | No log | 1.0 | 41 | 0.5692 | {'precision': 0.4959266802443992, 'recall': 0.2336852207293666, 'f1-score': 0.3176777560339204, 'support': 4168.0} | {'precision': 0.5318267419962335, 'recall': 0.6561338289962825, 'f1-score': 0.5874765966299147, 'support': 2152.0} | {'precision': 0.9195207551736657, 'recall': 0.8235421634511164, 'f1-score': 0.8688890159528846, 'support': 9226.0} | {'precision': 0.7774309560968989, 'recall': 0.9489770562411993, 'f1-score': 0.8546810891458411, 'support': 12073.0} | 0.7763 | {'precision': 0.6811762833777993, 'recall': 0.6655845673544912, 'f1-score': 0.6571811144406402, 'support': 27619.0} | {'precision': 0.7632765839539684, 'recall': 0.7763134074369094, 'f1-score': 0.7575678110552883, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4421 | {'precision': 0.6073619631901841, 'recall': 0.45129558541266795, 'f1-score': 0.5178251892635926, 'support': 4168.0} | {'precision': 0.7044500419815282, 'recall': 0.7797397769516728, 'f1-score': 0.7401852668725187, 'support': 2152.0} | {'precision': 0.9249115599680475, 'recall': 0.8784955560372859, 'f1-score': 0.9011062315859693, 'support': 9226.0} | {'precision': 0.8318756073858115, 'recall': 0.9217261658245672, 'f1-score': 0.874499017681729, 'support': 12073.0} | 0.8252 | {'precision': 0.7671497931313928, 'recall': 0.7578142710565485, 'f1-score': 0.7584039263509523, 'support': 27619.0} | {'precision': 0.8191436841723105, 'recall': 0.8252290090155328, 'f1-score': 0.8190957969602078, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4286 | {'precision': 0.5590650663297536, 'recall': 0.636996161228407, 'f1-score': 0.5954917573174835, 'support': 4168.0} | {'precision': 0.7408256880733946, 'recall': 0.7504646840148699, 'f1-score': 0.7456140350877194, 'support': 2152.0} | {'precision': 0.949893137022085, 'recall': 0.8671146759158899, 'f1-score': 0.9066183136899364, 'support': 9226.0} | {'precision': 0.8706390609716336, 'recall': 0.8847013998177752, 'f1-score': 0.877613902469085, 'support': 12073.0} | 0.8310 | {'precision': 0.7801057380992167, 'recall': 0.7848192302442354, 'f1-score': 0.7813345021410562, 'support': 27619.0} | {'precision': 0.839978983398119, 'recall': 0.8309859154929577, 'f1-score': 0.834442385843827, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4205 | {'precision': 0.6486063263388663, 'recall': 0.4968809980806142, 'f1-score': 0.5626952859665806, 'support': 4168.0} | {'precision': 0.7857142857142857, 'recall': 0.7973977695167286, 'f1-score': 0.7915129151291513, 'support': 2152.0} | {'precision': 0.9162780609478365, 'recall': 0.9157814871016692, 'f1-score': 0.9160297067273812, 'support': 9226.0} | {'precision': 0.8506259119883266, 'recall': 0.9174190342085645, 'f1-score': 0.8827608193193592, 'support': 12073.0} | 0.8441 | {'precision': 0.8003061462473288, 'recall': 0.7818698222268942, 'f1-score': 0.788249681785618, 'support': 27619.0} | {'precision': 0.8370120691110231, 'recall': 0.8440566276838408, 'f1-score': 0.8384630577202681, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4555 | {'precision': 0.5530616263043906, 'recall': 0.6739443378119002, 'f1-score': 0.6075483940737536, 'support': 4168.0} | {'precision': 0.7023945267958951, 'recall': 0.8587360594795539, 'f1-score': 0.7727367760819569, 'support': 2152.0} | {'precision': 0.931986531986532, 'recall': 0.9000650336006937, 'f1-score': 0.9157476841640935, 'support': 9226.0} | {'precision': 0.9083553050277298, 'recall': 0.8275490764515862, 'f1-score': 0.8660714285714286, 'support': 12073.0} | 0.8310 | {'precision': 0.773949497528637, 'recall': 0.8150736268359334, 'f1-score': 0.7905260707228081, 'support': 27619.0} | {'precision': 0.8465837004167056, 'recall': 0.8310221224519353, 'f1-score': 0.83637929468368, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4529 | {'precision': 0.5858436907520539, 'recall': 0.6672264875239923, 'f1-score': 0.6238923163208077, 'support': 4168.0} | {'precision': 0.779319606087735, 'recall': 0.8090148698884758, 'f1-score': 0.7938896488828089, 'support': 2152.0} | {'precision': 0.9153763440860215, 'recall': 0.9227184045089963, 'f1-score': 0.9190327107848429, 'support': 9226.0} | {'precision': 0.9016581407655672, 'recall': 0.8467655098152903, 'f1-score': 0.8733501345521336, 'support': 12073.0} | 0.8421 | {'precision': 0.7955494454228443, 'recall': 0.8114313179341887, 'f1-score': 0.8025412026351483, 'support': 27619.0} | {'precision': 0.8490485962328721, 'recall': 0.842101451899055, 'f1-score': 0.8447730063713313, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.5058 | {'precision': 0.6189119908857875, 'recall': 0.5213531669865643, 'f1-score': 0.565959109259018, 'support': 4168.0} | {'precision': 0.8097773475314618, 'recall': 0.7774163568773235, 'f1-score': 0.7932669511616881, 'support': 2152.0} | {'precision': 0.9237928391547137, 'recall': 0.9144808150877953, 'f1-score': 0.9191132414619532, 'support': 9226.0} | {'precision': 0.8518862808893021, 'recall': 0.9108755073304067, 'f1-score': 0.8803938835961892, 'support': 12073.0} | 0.8429 | {'precision': 0.8010921146153162, 'recall': 0.7810314615705225, 'f1-score': 0.7896832963697121, 'support': 27619.0} | {'precision': 0.8374670275215469, 'recall': 0.8428980049965603, 'f1-score': 0.8390876631549409, 'support': 27619.0} |
+ | No log | 8.0 | 328 | 0.5503 | {'precision': 0.579874797124971, 'recall': 0.6000479846449136, 'f1-score': 0.5897889399834925, 'support': 4168.0} | {'precision': 0.7640207075064711, 'recall': 0.8229553903345725, 'f1-score': 0.7923937360178971, 'support': 2152.0} | {'precision': 0.9484103877955048, 'recall': 0.882722740082376, 'f1-score': 0.9143883680458093, 'support': 9226.0} | {'precision': 0.8650108862188534, 'recall': 0.8885115547088545, 'f1-score': 0.8766037427474054, 'support': 12073.0} | 0.8379 | {'precision': 0.7893291946614501, 'recall': 0.7985594174426791, 'f1-score': 0.793293696698651, 'support': 27619.0} | {'precision': 0.8419711569605107, 'recall': 0.8379376516166407, 'f1-score': 0.8393807050053141, 'support': 27619.0} |
+ | No log | 9.0 | 369 | 0.5586 | {'precision': 0.5553219950315307, 'recall': 0.6972168905950096, 'f1-score': 0.6182321029677693, 'support': 4168.0} | {'precision': 0.762192490289167, 'recall': 0.820631970260223, 'f1-score': 0.7903334079212352, 'support': 2152.0} | {'precision': 0.9405251141552512, 'recall': 0.8930197268588771, 'f1-score': 0.9161570110085622, 'support': 9226.0} | {'precision': 0.8953046246352463, 'recall': 0.8386482233082084, 'f1-score': 0.8660508083140878, 'support': 12073.0} | 0.8341 | {'precision': 0.7883360560277988, 'recall': 0.8123792027555796, 'f1-score': 0.7976933325529135, 'support': 27619.0} | {'precision': 0.8487315887907376, 'recall': 0.8340635070060466, 'f1-score': 0.8394903831187637, 'support': 27619.0} |
+ | No log | 10.0 | 410 | 0.5841 | {'precision': 0.5999538319482918, 'recall': 0.6235604606525912, 'f1-score': 0.611529411764706, 'support': 4168.0} | {'precision': 0.7489643744821872, 'recall': 0.8401486988847584, 'f1-score': 0.791940429259746, 'support': 2152.0} | {'precision': 0.9401190748797802, 'recall': 0.8899848254931715, 'f1-score': 0.9143652561247216, 'support': 9226.0} | {'precision': 0.8760194414696433, 'recall': 0.8808084154725421, 'f1-score': 0.8784074012886173, 'support': 12073.0} | 0.8419 | {'precision': 0.7912641806949756, 'recall': 0.8086256001257658, 'f1-score': 0.7990606246094477, 'support': 27619.0} | {'precision': 0.8458706038288859, 'recall': 0.8418842101451899, 'f1-score': 0.8434069590052654, 'support': 27619.0} |
+ | No log | 11.0 | 451 | 0.5806 | {'precision': 0.5889105479748754, 'recall': 0.6523512476007678, 'f1-score': 0.6190096755833807, 'support': 4168.0} | {'precision': 0.7572354211663067, 'recall': 0.8145910780669146, 'f1-score': 0.7848668009850012, 'support': 2152.0} | {'precision': 0.9387871202639663, 'recall': 0.8943203988727509, 'f1-score': 0.91601443241743, 'support': 9226.0} | {'precision': 0.8838460245419398, 'recall': 0.8710345398823822, 'f1-score': 0.8773935171665763, 'support': 12073.0} | 0.8414 | {'precision': 0.792194778486772, 'recall': 0.8080743161057039, 'f1-score': 0.7993211065380971, 'support': 27619.0} | {'precision': 0.8478247878691975, 'recall': 0.8414135196784822, 'f1-score': 0.8440923556170222, 'support': 27619.0} |
+ | No log | 12.0 | 492 | 0.6116 | {'precision': 0.600094540297802, 'recall': 0.6091650671785028, 'f1-score': 0.6045957852125253, 'support': 4168.0} | {'precision': 0.7798594847775175, 'recall': 0.7736988847583643, 'f1-score': 0.7767669699090273, 'support': 2152.0} | {'precision': 0.9319225170753554, 'recall': 0.9021244309559939, 'f1-score': 0.9167814066200364, 'support': 9226.0} | {'precision': 0.8723421522480117, 'recall': 0.8903338027002402, 'f1-score': 0.8812461569993852, 'support': 12073.0} | 0.8428 | {'precision': 0.7960546735996716, 'recall': 0.7938305463982752, 'f1-score': 0.7948475796852434, 'support': 27619.0} | {'precision': 0.8439536406759814, 'recall': 0.8427531771606502, 'f1-score': 0.843226324738045, 'support': 27619.0} |
+ | 0.2736 | 13.0 | 533 | 0.6151 | {'precision': 0.5982355746304244, 'recall': 0.6019673704414588, 'f1-score': 0.6000956708921311, 'support': 4168.0} | {'precision': 0.7789473684210526, 'recall': 0.7908921933085502, 'f1-score': 0.7848743370993774, 'support': 2152.0} | {'precision': 0.9333632488220777, 'recall': 0.9017992629525254, 'f1-score': 0.9173098125689085, 'support': 9226.0} | {'precision': 0.8696251825409703, 'recall': 0.8878489190756232, 'f1-score': 0.8786425673183327, 'support': 12073.0} | 0.8418 | {'precision': 0.7950428436036313, 'recall': 0.7956269364445394, 'f1-score': 0.7952305969696873, 'support': 27619.0} | {'precision': 0.8428956433741749, 'recall': 0.8418117962272349, 'f1-score': 0.8422173277711444, 'support': 27619.0} |
+ | 0.2736 | 14.0 | 574 | 0.6109 | {'precision': 0.6048215551878988, 'recall': 0.6139635316698656, 'f1-score': 0.6093582569353494, 'support': 4168.0} | {'precision': 0.7849462365591398, 'recall': 0.8141263940520446, 'f1-score': 0.7992700729927007, 'support': 2152.0} | {'precision': 0.931049822064057, 'recall': 0.9074355083459787, 'f1-score': 0.9190910088923043, 'support': 9226.0} | {'precision': 0.8758632028937849, 'recall': 0.8824650045556199, 'f1-score': 0.8791517101951561, 'support': 12073.0} | 0.8450 | {'precision': 0.7991702041762201, 'recall': 0.8044976096558771, 'f1-score': 0.8017177622538776, 'support': 27619.0} | {'precision': 0.8463109688981529, 'recall': 0.8449618016582787, 'f1-score': 0.8455543885446015, 'support': 27619.0} |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
meta_data/README_s42_e15.md ADDED
@@ -0,0 +1,95 @@
+ ---
+ license: apache-2.0
+ base_model: allenai/longformer-base-4096
+ tags:
+ - generated_from_trainer
+ datasets:
+ - essays_su_g
+ metrics:
+ - accuracy
+ model-index:
+ - name: longformer-simple
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: essays_su_g
+       type: essays_su_g
+       config: simple
+       split: train[80%:100%]
+       args: simple
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.8436583511350881
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # longformer-simple
+
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6438
+ - Claim: {'precision': 0.6039084842707341, 'recall': 0.6079654510556622, 'f1-score': 0.6059301769488283, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7791218637992832, 'recall': 0.8080855018587361, 'f1-score': 0.7933394160583942, 'support': 2152.0}
+ - O: {'precision': 0.936604624929498, 'recall': 0.8999566442662043, 'f1-score': 0.9179149853518324, 'support': 9226.0}
+ - Premise: {'precision': 0.8701119584617881, 'recall': 0.8883458958005467, 'f1-score': 0.8791343907537195, 'support': 12073.0}
+ - Accuracy: 0.8437
+ - Macro avg: {'precision': 0.7974367328653259, 'recall': 0.8010883732452874, 'f1-score': 0.7990797422781937, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8450608913228282, 'recall': 0.8436583511350881, 'f1-score': 0.8441745376482147, 'support': 27619.0}
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 15
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | No log | 1.0 | 41 | 0.5691 | {'precision': 0.4949392712550607, 'recall': 0.23464491362763915, 'f1-score': 0.318359375, 'support': 4168.0} | {'precision': 0.5329815303430079, 'recall': 0.6570631970260223, 'f1-score': 0.5885535900104059, 'support': 2152.0} | {'precision': 0.919937015503876, 'recall': 0.8232169954476479, 'f1-score': 0.86889371925409, 'support': 9226.0} | {'precision': 0.7775213791231166, 'recall': 0.9488942267870455, 'f1-score': 0.8547021300406611, 'support': 12073.0} | 0.7764 | {'precision': 0.6813447990562653, 'recall': 0.6659548332220888, 'f1-score': 0.6576272035762892, 'support': 27619.0} | {'precision': 0.7633961277048913, 'recall': 0.7763858213548644, 'f1-score': 0.7577653597350205, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4424 | {'precision': 0.6061801446416831, 'recall': 0.44241842610364684, 'f1-score': 0.511511789181692, 'support': 4168.0} | {'precision': 0.6986357999173212, 'recall': 0.7853159851301115, 'f1-score': 0.7394443229052724, 'support': 2152.0} | {'precision': 0.9271515569343904, 'recall': 0.8745935399956645, 'f1-score': 0.900105973562385, 'support': 9226.0} | {'precision': 0.8290598290598291, 'recall': 0.9239625610867225, 'f1-score': 0.8739423378251332, 'support': 12073.0} | 0.8240 | {'precision': 0.765256832638306, 'recall': 0.7565726280790362, 'f1-score': 0.7562511058686207, 'support': 27619.0} | {'precision': 0.818029713776915, 'recall': 0.8239979724102973, 'f1-score': 0.8175078343477619, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4282 | {'precision': 0.5578358208955224, 'recall': 0.6456333973128598, 'f1-score': 0.598532028469751, 'support': 4168.0} | {'precision': 0.7531854648419065, 'recall': 0.741635687732342, 'f1-score': 0.74736595645048, 'support': 2152.0} | {'precision': 0.9484389782403028, 'recall': 0.8692824626056797, 'f1-score': 0.9071372016740188, 'support': 9226.0} | {'precision': 0.872013093289689, 'recall': 0.8826306634639278, 'f1-score': 0.8772897542501955, 'support': 12073.0} | 0.8314 | {'precision': 0.7828683393168552, 'recall': 0.7847955527787023, 'f1-score': 0.7825812352111113, 'support': 27619.0} | {'precision': 0.8408713896362565, 'recall': 0.831420399000688, 'f1-score': 0.835069338449997, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4201 | {'precision': 0.6498756218905473, 'recall': 0.5014395393474088, 'f1-score': 0.566088840736728, 'support': 4168.0} | {'precision': 0.781447963800905, 'recall': 0.8025092936802974, 'f1-score': 0.7918386061439706, 'support': 2152.0} | {'precision': 0.9166757197175448, 'recall': 0.9145892044222849, 'f1-score': 0.9156312733980794, 'support': 9226.0} | {'precision': 0.8523252232830305, 'recall': 0.9169220574836412, 'f1-score': 0.8834443956745541, 'support': 12073.0} | 0.8445 | {'precision': 0.8000811321730068, 'recall': 0.7838650237334082, 'f1-score': 0.7892507789883332, 'support': 27619.0} | {'precision': 0.8377468489427367, 'recall': 0.8445273181505485, 'f1-score': 0.839166272709442, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4511 | {'precision': 0.5708701913186587, 'recall': 0.6657869481765835, 'f1-score': 0.614686011739949, 'support': 4168.0} | {'precision': 0.7117988394584139, 'recall': 0.8550185873605948, 'f1-score': 0.7768629934557737, 'support': 2152.0} | {'precision': 0.9312506998096518, 'recall': 0.9014740949490571, 'f1-score': 0.916120504488627, 'support': 9226.0} | {'precision': 0.9057107276285359, 'recall': 0.8433695021949805, 'f1-score': 0.8734291228822647, 'support': 12073.0} | 0.8369 | {'precision': 0.7799076145538151, 'recall': 0.816412283170304, 'f1-score': 0.7952746581416535, 'support': 27619.0} | {'precision': 0.8486021445756123, 'recall': 0.8368876498062928, 'f1-score': 0.8411187238429554, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4568 | {'precision': 0.5821080969144751, 'recall': 0.6744241842610365, 'f1-score': 0.6248749583194397, 'support': 4168.0} | {'precision': 0.7789237668161435, 'recall': 0.8071561338289963, 'f1-score': 0.7927886809675947, 'support': 2152.0} | {'precision': 0.9134450171821306, 'recall': 0.9219596791675699, 'f1-score': 0.9176825979070017, 'support': 9226.0} | {'precision': 0.9038051209103841, 'recall': 0.8420442309285182, 'f1-score': 0.8718322541915012, 'support': 12073.0} | 0.8407 | {'precision': 0.7945705004557834, 'recall': 0.8113960570465302, 'f1-score': 0.8017946228463844, 'support': 27619.0} | {'precision': 0.8487473640392945, 'recall': 0.8407255874579094, 'f1-score': 0.8437210080329368, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.5084 | {'precision': 0.6148536720044174, 'recall': 0.5343090211132437, 'f1-score': 0.5717586649550707, 'support': 4168.0} | {'precision': 0.8070429329474192, 'recall': 0.7774163568773235, 'f1-score': 0.7919526627218936, 'support': 2152.0} | {'precision': 0.9237677984665936, 'recall': 0.914155647084327, 'f1-score': 0.9189365874918283, 'support': 9226.0} | {'precision': 0.8554009692043145, 'recall': 0.9064855462602501, 'f1-score': 0.8802026782482808, 'support': 12073.0} | 0.8428 | {'precision': 0.8002663431556862, 'recall': 0.7830916428337862, 'f1-score': 0.7907126483542682, 'support': 27619.0} | {'precision': 0.838169524837023, 'recall': 0.8428255910786053, 'f1-score': 0.8397178803143253, 'support': 27619.0} |
+ | No log | 8.0 | 328 | 0.5501 | {'precision': 0.5789353438428148, 'recall': 0.6079654510556622, 'f1-score': 0.5930953774136923, 'support': 4168.0} | {'precision': 0.7649107531562909, 'recall': 0.8164498141263941, 'f1-score': 0.7898404135760846, 'support': 2152.0} | {'precision': 0.9487657196087564, 'recall': 0.8831562974203339, 'f1-score': 0.9147861232738297, 'support': 9226.0} | {'precision': 0.8670389253054949, 'recall': 0.8874347718048539, 'f1-score': 0.8771182971756039, 'support': 12073.0} | 0.8383 | {'precision': 0.7899126854783393, 'recall': 0.7987515836018111, 'f1-score': 0.7937100528598027, 'support': 27619.0} | {'precision': 0.8429039403400853, 'recall': 0.8382997212064158, 'f1-score': 0.8400385270357877, 'support': 27619.0} |
+ | No log | 9.0 | 369 | 0.5615 | {'precision': 0.5539168741620379, 'recall': 0.6938579654510557, 'f1-score': 0.6160400468633508, 'support': 4168.0} | {'precision': 0.7715914072775099, 'recall': 0.8178438661710037, 'f1-score': 0.7940446650124069, 'support': 2152.0} | {'precision': 0.9404937990670156, 'recall': 0.8959462388900932, 'f1-score': 0.9176797113516515, 'support': 9226.0} | {'precision': 0.8944209039548022, 'recall': 0.8392280294872857, 'f1-score': 0.865945899747874, 'support': 12073.0} | 0.8346 | {'precision': 0.7901057461153415, 'recall': 0.8117190249998596, 'f1-score': 0.7984275807438208, 'support': 27619.0} | {'precision': 0.8488551216049527, 'recall': 0.8345704044317318, 'f1-score': 0.8399115427430235, 'support': 27619.0} |
+ | No log | 10.0 | 410 | 0.5889 | {'precision': 0.5963951631302761, 'recall': 0.6271593090211133, 'f1-score': 0.6113904806455386, 'support': 4168.0} | {'precision': 0.7538461538461538, 'recall': 0.8424721189591078, 'f1-score': 0.7956989247311828, 'support': 2152.0} | {'precision': 0.9409945004582951, 'recall': 0.8902016041621504, 'f1-score': 0.9148936170212766, 'support': 9226.0} | {'precision': 0.8769726514087416, 'recall': 0.8791518263894641, 'f1-score': 0.8780608868299139, 'support': 12073.0} | 0.8420 | {'precision': 0.7920521172108667, 'recall': 0.8097462146329589, 'f1-score': 0.800010977306978, 'support': 27619.0} | {'precision': 0.8464230437267781, 'recall': 0.841956624063145, 'f1-score': 0.8437038707660653, 'support': 27619.0} |
+ | No log | 11.0 | 451 | 0.5894 | {'precision': 0.5867732872271451, 'recall': 0.6513915547024952, 'f1-score': 0.6173962478681069, 'support': 4168.0} | {'precision': 0.7666963490650045, 'recall': 0.800185873605948, 'f1-score': 0.7830832196452934, 'support': 2152.0} | {'precision': 0.9389980688401681, 'recall': 0.8959462388900932, 'f1-score': 0.9169671085473403, 'support': 9226.0} | {'precision': 0.8817717491417567, 'recall': 0.8722769816946906, 'f1-score': 0.8769986675549633, 'support': 12073.0} | 0.8412 | {'precision': 0.7935598635685186, 'recall': 0.8049501622233067, 'f1-score': 0.798611310903926, 'support': 27619.0} | {'precision': 0.8474031686468898, 'recall': 0.8412324848835946, 'f1-score': 0.8438555380947815, 'support': 27619.0} |
+ | No log | 12.0 | 492 | 0.6198 | {'precision': 0.5958633511503603, 'recall': 0.6151631477927063, 'f1-score': 0.6053594616928344, 'support': 4168.0} | {'precision': 0.7789770061004223, 'recall': 0.7713754646840149, 'f1-score': 0.7751575998132151, 'support': 2152.0} | {'precision': 0.9328919313208394, 'recall': 0.901040537611099, 'f1-score': 0.9166896399625074, 'support': 9226.0} | {'precision': 0.8742056379338439, 'recall': 0.8887600430713162, 'f1-score': 0.8814227625580152, 'support': 12073.0} | 0.8424 | {'precision': 0.7954844816263664, 'recall': 0.7940847982897842, 'f1-score': 0.7946573660066429, 'support': 27619.0} | {'precision': 0.8443847565032829, 'recall': 0.8424273145298526, 'f1-score': 0.8432627184833189, 'support': 27619.0} |
+ | 0.271 | 13.0 | 533 | 0.6308 | {'precision': 0.5984138428262437, 'recall': 0.5974088291746641, 'f1-score': 0.597910913675111, 'support': 4168.0} | {'precision': 0.7893231649189705, 'recall': 0.7695167286245354, 'f1-score': 0.779294117647059, 'support': 2152.0} | {'precision': 0.9218476357267951, 'recall': 0.9128549750704531, 'f1-score': 0.917329266964383, 'support': 9226.0} | {'precision': 0.8714005235602095, 'recall': 0.8822993456473122, 'f1-score': 0.8768160678273037, 'support': 12073.0} | 0.8407 | {'precision': 0.7952462917580546, 'recall': 0.7905199696292412, 'f1-score': 0.7928375915284641, 'support': 27619.0} | {'precision': 0.8406603119578272, 'recall': 0.8407255874579094, 'f1-score': 0.8406609157922722, 'support': 27619.0} |
+ | 0.271 | 14.0 | 574 | 0.6361 | {'precision': 0.6123370110330993, 'recall': 0.5858925143953935, 'f1-score': 0.5988229524276607, 'support': 4168.0} | {'precision': 0.7828622700762674, 'recall': 0.8108736059479554, 'f1-score': 0.7966217758502625, 'support': 2152.0} | {'precision': 0.9273249392533687, 'recall': 0.9100368523737264, 'f1-score': 0.9185995623632386, 'support': 9226.0} | {'precision': 0.8696145124716553, 'recall': 0.8894226787045474, 'f1-score': 0.879407067687646, 'support': 12073.0} | 0.8444 | {'precision': 0.7980346832085977, 'recall': 0.7990564128554057, 'f1-score': 0.798362839582202, 'support': 27619.0} | {'precision': 0.8433070048087172, 'recall': 0.8443824903146385, 'f1-score': 0.8437056091062111, 'support': 27619.0} |
+ | 0.271 | 15.0 | 615 | 0.6438 | {'precision': 0.6039084842707341, 'recall': 0.6079654510556622, 'f1-score': 0.6059301769488283, 'support': 4168.0} | {'precision': 0.7791218637992832, 'recall': 0.8080855018587361, 'f1-score': 0.7933394160583942, 'support': 2152.0} | {'precision': 0.936604624929498, 'recall': 0.8999566442662043, 'f1-score': 0.9179149853518324, 'support': 9226.0} | {'precision': 0.8701119584617881, 'recall': 0.8883458958005467, 'f1-score': 0.8791343907537195, 'support': 12073.0} | 0.8437 | {'precision': 0.7974367328653259, 'recall': 0.8010883732452874, 'f1-score': 0.7990797422781937, 'support': 27619.0} | {'precision': 0.8450608913228282, 'recall': 0.8436583511350881, 'f1-score': 0.8441745376482147, 'support': 27619.0} |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
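The `Macro avg` and `Weighted avg` columns in the training-results table above follow the usual sklearn-style definitions: macro is the unweighted mean of the per-class f1 scores, while weighted averages them by support. A minimal sketch, using the per-class f1 and support values from the epoch-15 row (that the card's numbers come from a `classification_report`-style computation is an assumption consistent with the values):

```python
# Sketch: reproduce the "Macro avg" / "Weighted avg" f1 columns from the
# per-class cells of the epoch-15 row above. (Assumes sklearn-style
# averaging, which the reported numbers are consistent with.)
per_class = {
    "Claim":      {"f1": 0.6059301769488283, "support": 4168},
    "Majorclaim": {"f1": 0.7933394160583942, "support": 2152},
    "O":          {"f1": 0.9179149853518324, "support": 9226},
    "Premise":    {"f1": 0.8791343907537195, "support": 12073},
}

total_support = sum(c["support"] for c in per_class.values())  # 27619
# Macro: unweighted mean over classes.
macro_f1 = sum(c["f1"] for c in per_class.values()) / len(per_class)
# Weighted: mean over classes, weighted by each class's support.
weighted_f1 = sum(c["f1"] * c["support"] for c in per_class.values()) / total_support
```

Both results agree with the epoch-15 row (0.7991 macro, 0.8442 weighted) up to floating-point rounding.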
meta_data/README_s42_e4.md CHANGED
@@ -1,5 +1,4 @@
  ---
- license: apache-2.0
  base_model: allenai/longformer-base-4096
  tags:
  - generated_from_trainer
@@ -17,12 +16,12 @@ model-index:
  name: essays_su_g
  type: essays_su_g
  config: simple
- split: test
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.8280482998315956
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,14 +31,14 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4474
- - Claim: {'precision': 0.5788206979542719, 'recall': 0.5656161806208843, 'f1-score': 0.5721422624003807, 'support': 4252.0}
- - Majorclaim: {'precision': 0.6985815602836879, 'recall': 0.812557286892759, 'f1-score': 0.751271186440678, 'support': 2182.0}
- - O: {'precision': 0.93909038572251, 'recall': 0.8793530997304583, 'f1-score': 0.9082405345211582, 'support': 9275.0}
- - Premise: {'precision': 0.8599473306200622, 'recall': 0.8832786885245901, 'f1-score': 0.8714568759856051, 'support': 12200.0}
- - Accuracy: 0.8280
- - Macro avg: {'precision': 0.7691099936451331, 'recall': 0.7852013139421729, 'f1-score': 0.7757777148369556, 'support': 27909.0}
- - Weighted avg: {'precision': 0.8308026562535961, 'recall': 0.8280482998315956, 'f1-score': 0.828683488238493, 'support': 27909.0}
 
  ## Model description
 
@@ -68,12 +67,12 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
- |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
- | No log | 1.0 | 41 | 0.5887 | {'precision': 0.4995083579154376, 'recall': 0.2389463781749765, 'f1-score': 0.32325803372573975, 'support': 4252.0} | {'precision': 0.5970350404312669, 'recall': 0.4060494958753437, 'f1-score': 0.4833606110201855, 'support': 2182.0} | {'precision': 0.8159389073820247, 'recall': 0.898544474393531, 'f1-score': 0.8552516804351173, 'support': 9275.0} | {'precision': 0.7941031247795726, 'recall': 0.9227868852459017, 'f1-score': 0.8536224741251849, 'support': 12200.0} | 0.7701 | {'precision': 0.6766463576270755, 'recall': 0.6165818084224383, 'f1-score': 0.6288731998265569, 'support': 27909.0} | {'precision': 0.7410703172581077, 'recall': 0.7701458310939123, 'f1-score': 0.7444136132792598, 'support': 27909.0} |
- | No log | 2.0 | 82 | 0.4737 | {'precision': 0.5664355062413314, 'recall': 0.48024459078080906, 'f1-score': 0.5197912689321624, 'support': 4252.0} | {'precision': 0.707936507936508, 'recall': 0.7153987167736022, 'f1-score': 0.7116480510599499, 'support': 2182.0} | {'precision': 0.9119831504267819, 'recall': 0.8870080862533692, 'f1-score': 0.8993222562308703, 'support': 9275.0} | {'precision': 0.8385838813274201, 'recall': 0.8989344262295081, 'f1-score': 0.8677110530896431, 'support': 12200.0} | 0.8168 | {'precision': 0.7562347614830104, 'recall': 0.7453964550093222, 'f1-score': 0.7496181573281565, 'support': 27909.0} | {'precision': 0.8112998783639159, 'recall': 0.81683327958723, 'f1-score': 0.8130086100235527, 'support': 27909.0} |
- | No log | 3.0 | 123 | 0.4448 | {'precision': 0.6023609816713265, 'recall': 0.4560206961429915, 'f1-score': 0.5190737518404497, 'support': 4252.0} | {'precision': 0.7517178195144297, 'recall': 0.7520623281393217, 'f1-score': 0.7518900343642613, 'support': 2182.0} | {'precision': 0.9046644403748788, 'recall': 0.9054447439353099, 'f1-score': 0.9050544239681, 'support': 9275.0} | {'precision': 0.8368874773139746, 'recall': 0.9071311475409836, 'f1-score': 0.8705947136563877, 'support': 12200.0} | 0.8257 | {'precision': 0.7739076797186524, 'recall': 0.7551647289396517, 'f1-score': 0.7616532309572996, 'support': 27909.0} | {'precision': 0.8170223613871673, 'recall': 0.8257193020172704, 'f1-score': 0.8192110407653613, 'support': 27909.0} |
- | No log | 4.0 | 164 | 0.4474 | {'precision': 0.5788206979542719, 'recall': 0.5656161806208843, 'f1-score': 0.5721422624003807, 'support': 4252.0} | {'precision': 0.6985815602836879, 'recall': 0.812557286892759, 'f1-score': 0.751271186440678, 'support': 2182.0} | {'precision': 0.93909038572251, 'recall': 0.8793530997304583, 'f1-score': 0.9082405345211582, 'support': 9275.0} | {'precision': 0.8599473306200622, 'recall': 0.8832786885245901, 'f1-score': 0.8714568759856051, 'support': 12200.0} | 0.8280 | {'precision': 0.7691099936451331, 'recall': 0.7852013139421729, 'f1-score': 0.7757777148369556, 'support': 27909.0} | {'precision': 0.8308026562535961, 'recall': 0.8280482998315956, 'f1-score': 0.828683488238493, 'support': 27909.0} |
 
 
  ### Framework versions
 
  ---
 
  base_model: allenai/longformer-base-4096
  tags:
  - generated_from_trainer
 
  name: essays_su_g
  type: essays_su_g
  config: simple
+ split: train[80%:100%]
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
+ value: 0.8299721206415873
  ---
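The `split: train[80%:100%]` value above uses the `datasets` library's percent-slicing notation: evaluation runs on the last 20% of the train split rather than the `test` split used previously. A rough sketch of the index arithmetic behind such a slice; `percent_slice` is a hypothetical helper for intuition, and the library's exact rounding of percent boundaries is not reproduced here:

```python
# Hypothetical helper illustrating percent-slice notation like
# train[80%:100%]. The datasets library's exact rounding of percent
# boundaries may differ for sizes not divisible by 100.
def percent_slice(n_examples: int, start_pct: int, stop_pct: int) -> range:
    start = n_examples * start_pct // 100
    stop = n_examples * stop_pct // 100
    return range(start, stop)

held_out = percent_slice(1000, 80, 100)  # indices 800..999
```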
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 
 
  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.4321
+ - Claim: {'precision': 0.5835557928457021, 'recall': 0.52447216890595, 'f1-score': 0.5524387161991408, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.6944444444444444, 'recall': 0.824814126394052, 'f1-score': 0.754035683942226, 'support': 2152.0}
+ - O: {'precision': 0.934596507248031, 'recall': 0.8874918707999133, 'f1-score': 0.9104353143937287, 'support': 9226.0}
+ - Premise: {'precision': 0.8580758203249442, 'recall': 0.8924045390540877, 'f1-score': 0.8749035689634171, 'support': 12073.0}
+ - Accuracy: 0.8300
+ - Macro avg: {'precision': 0.7676681412157805, 'recall': 0.7822956762885007, 'f1-score': 0.7729533208746281, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8294594932357695, 'recall': 0.8299721206415873, 'f1-score': 0.8286917107662684, 'support': 27619.0}
 
  ## Model description
 
 
 
  ### Training results
 
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+ | No log | 1.0 | 41 | 0.6033 | {'precision': 0.4527056753189617, 'recall': 0.2468809980806142, 'f1-score': 0.31951560316721006, 'support': 4168.0} | {'precision': 0.5835601524224279, 'recall': 0.49814126394052044, 'f1-score': 0.5374780646778641, 'support': 2152.0} | {'precision': 0.8875888965359028, 'recall': 0.8387166702796445, 'f1-score': 0.862460989745876, 'support': 9226.0} | {'precision': 0.7685754850922859, 'recall': 0.9416052348215025, 'f1-score': 0.8463371054198927, 'support': 12073.0} | 0.7678 | {'precision': 0.6731075523423946, 'recall': 0.6313360417805705, 'f1-score': 0.6414479407527107, 'support': 27619.0} | {'precision': 0.7462473548536118, 'recall': 0.7678409790361708, 'f1-score': 0.7481547773024915, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4684 | {'precision': 0.5774099318403116, 'recall': 0.42682341650671785, 'f1-score': 0.49082632087184436, 'support': 4168.0} | {'precision': 0.6601866251944012, 'recall': 0.7890334572490706, 'f1-score': 0.7188823031329382, 'support': 2152.0} | {'precision': 0.9429934406678593, 'recall': 0.8570344678083677, 'f1-score': 0.8979615013343932, 'support': 9226.0} | {'precision': 0.8198954421618437, 'recall': 0.9223059720036445, 'f1-score': 0.8680907460824822, 'support': 12073.0} | 0.8153 | {'precision': 0.7501213599661041, 'recall': 0.7487993283919501, 'f1-score': 0.7439402178554144, 'support': 27619.0} | {'precision': 0.8119780357779204, 'recall': 0.815344509214671, 'f1-score': 0.809509801603999, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4395 | {'precision': 0.5962599632127529, 'recall': 0.4666506717850288, 'f1-score': 0.5235531628532973, 'support': 4168.0} | {'precision': 0.7146464646464646, 'recall': 0.7890334572490706, 'f1-score': 0.75, 'support': 2152.0} | {'precision': 0.9242167175658862, 'recall': 0.885649252113592, 'f1-score': 0.9045220567886201, 'support': 9226.0} | {'precision': 0.8378995433789954, 'recall': 0.9119522902344074, 'f1-score': 0.873358981477809, 'support': 12073.0} | 0.8264 | {'precision': 0.7682556722010248, 'recall': 0.7633214178455248, 'f1-score': 0.7628585502799315, 'support': 27619.0} | {'precision': 0.820663866978074, 'recall': 0.8263876317028133, 'f1-score': 0.8213676477094007, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4321 | {'precision': 0.5835557928457021, 'recall': 0.52447216890595, 'f1-score': 0.5524387161991408, 'support': 4168.0} | {'precision': 0.6944444444444444, 'recall': 0.824814126394052, 'f1-score': 0.754035683942226, 'support': 2152.0} | {'precision': 0.934596507248031, 'recall': 0.8874918707999133, 'f1-score': 0.9104353143937287, 'support': 9226.0} | {'precision': 0.8580758203249442, 'recall': 0.8924045390540877, 'f1-score': 0.8749035689634171, 'support': 12073.0} | 0.8300 | {'precision': 0.7676681412157805, 'recall': 0.7822956762885007, 'f1-score': 0.7729533208746281, 'support': 27619.0} | {'precision': 0.8294594932357695, 'recall': 0.8299721206415873, 'f1-score': 0.8286917107662684, 'support': 27619.0} |
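The per-class cells in these generated tables are Python dict literals (single-quoted keys) rather than JSON, so when post-processing the cards they can be recovered with `ast.literal_eval`; a small sketch using the Claim cell from the epoch-4 row above:

```python
# Sketch: the per-class table cells are Python dict literals (single
# quotes), so ast.literal_eval recovers them where json.loads would fail.
import ast

cell = ("{'precision': 0.5835557928457021, 'recall': 0.52447216890595, "
        "'f1-score': 0.5524387161991408, 'support': 4168.0}")
metrics = ast.literal_eval(cell)
```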
 
 
  ### Framework versions
meta_data/README_s42_e5.md CHANGED
@@ -1,5 +1,4 @@
  ---
- license: apache-2.0
  base_model: allenai/longformer-base-4096
  tags:
  - generated_from_trainer
@@ -17,12 +16,12 @@ model-index:
  name: essays_su_g
  type: essays_su_g
  config: simple
- split: test
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.8340320326776308
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,14 +31,14 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4397
- - Claim: {'precision': 0.5897372943776087, 'recall': 0.5649106302916275, 'f1-score': 0.5770570570570571, 'support': 4252.0}
- - Majorclaim: {'precision': 0.7365996649916248, 'recall': 0.806141154903758, 'f1-score': 0.7698030634573303, 'support': 2182.0}
- - O: {'precision': 0.9290423511006817, 'recall': 0.8963881401617251, 'f1-score': 0.9124231782265146, 'support': 9275.0}
- - Premise: {'precision': 0.8642291383310665, 'recall': 0.8854098360655738, 'f1-score': 0.8746912830478967, 'support': 12200.0}
- - Accuracy: 0.8340
- - Macro avg: {'precision': 0.7799021122002454, 'recall': 0.7882124403556711, 'f1-score': 0.7834936454471997, 'support': 27909.0}
- - Weighted avg: {'precision': 0.8339706452686643, 'recall': 0.8340320326776308, 'f1-score': 0.8336850307178961, 'support': 27909.0}
 
  ## Model description
 
@@ -68,13 +67,13 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
- |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
- | No log | 1.0 | 41 | 0.5888 | {'precision': 0.49844559585492226, 'recall': 0.2262464722483537, 'f1-score': 0.311226140407635, 'support': 4252.0} | {'precision': 0.6139372822299651, 'recall': 0.40375802016498624, 'f1-score': 0.4871440420237766, 'support': 2182.0} | {'precision': 0.8171685569026202, 'recall': 0.9011320754716982, 'f1-score': 0.8570989078603293, 'support': 9275.0} | {'precision': 0.7903744062587315, 'recall': 0.9274590163934426, 'f1-score': 0.8534469754110725, 'support': 12200.0} | 0.7709 | {'precision': 0.6799814603115598, 'recall': 0.6146488960696201, 'f1-score': 0.6272290164257033, 'support': 27909.0} | {'precision': 0.7410085615761669, 'recall': 0.7709341072772224, 'f1-score': 0.7434134981235008, 'support': 27909.0} |
- | No log | 2.0 | 82 | 0.4676 | {'precision': 0.574496644295302, 'recall': 0.5032925682031985, 'f1-score': 0.5365425598595963, 'support': 4252.0} | {'precision': 0.6832784184514004, 'recall': 0.7603116406966086, 'f1-score': 0.7197396963123645, 'support': 2182.0} | {'precision': 0.9165271733065506, 'recall': 0.8854986522911051, 'f1-score': 0.9007457775828033, 'support': 9275.0} | {'precision': 0.8488472059398202, 'recall': 0.8902459016393443, 'f1-score': 0.8690538107621524, 'support': 12200.0} | 0.8196 | {'precision': 0.7557873604982683, 'recall': 0.7598371907075642, 'f1-score': 0.7565204611292291, 'support': 27909.0} | {'precision': 0.816596749632328, 'recall': 0.8195564154932101, 'f1-score': 0.8172533792058241, 'support': 27909.0} |
- | No log | 3.0 | 123 | 0.4384 | {'precision': 0.6117381489841986, 'recall': 0.44614299153339604, 'f1-score': 0.5159798721610226, 'support': 4252.0} | {'precision': 0.7290375877736472, 'recall': 0.8088909257561869, 'f1-score': 0.7668911579404737, 'support': 2182.0} | {'precision': 0.9303112313937754, 'recall': 0.889487870619946, 'f1-score': 0.9094416579397012, 'support': 9275.0} | {'precision': 0.8289074635697906, 'recall': 0.9185245901639344, 'f1-score': 0.8714180178078463, 'support': 12200.0} | 0.8283 | {'precision': 0.774998607930353, 'recall': 0.7657615945183658, 'f1-score': 0.7659326764622609, 'support': 27909.0} | {'precision': 0.8217126501390813, 'recall': 0.8283349457164355, 'f1-score': 0.8217304137626299, 'support': 27909.0} |
- | No log | 4.0 | 164 | 0.4487 | {'precision': 0.5776205218929678, 'recall': 0.6142991533396049, 'f1-score': 0.5953954866651471, 'support': 4252.0} | {'precision': 0.7034400948991696, 'recall': 0.8153070577451879, 'f1-score': 0.7552536616429633, 'support': 2182.0} | {'precision': 0.9331742243436754, 'recall': 0.8852830188679245, 'f1-score': 0.9085979860573199, 'support': 9275.0} | {'precision': 0.8791773778920309, 'recall': 0.8690163934426229, 'f1-score': 0.8740673564450308, 'support': 12200.0} | 0.8314 | {'precision': 0.7733530547569609, 'recall': 0.795976405848835, 'f1-score': 0.7833286227026153, 'support': 27909.0} | {'precision': 0.837439667749803, 'recall': 0.8314163889784657, 'f1-score': 0.8337974548825171, 'support': 27909.0} |
- | No log | 5.0 | 205 | 0.4397 | {'precision': 0.5897372943776087, 'recall': 0.5649106302916275, 'f1-score': 0.5770570570570571, 'support': 4252.0} | {'precision': 0.7365996649916248, 'recall': 0.806141154903758, 'f1-score': 0.7698030634573303, 'support': 2182.0} | {'precision': 0.9290423511006817, 'recall': 0.8963881401617251, 'f1-score': 0.9124231782265146, 'support': 9275.0} | {'precision': 0.8642291383310665, 'recall': 0.8854098360655738, 'f1-score': 0.8746912830478967, 'support': 12200.0} | 0.8340 | {'precision': 0.7799021122002454, 'recall': 0.7882124403556711, 'f1-score': 0.7834936454471997, 'support': 27909.0} | {'precision': 0.8339706452686643, 'recall': 0.8340320326776308, 'f1-score': 0.8336850307178961, 'support': 27909.0} |
 
 
  ### Framework versions
 
  ---
 
  base_model: allenai/longformer-base-4096
  tags:
  - generated_from_trainer
 
  name: essays_su_g
  type: essays_su_g
  config: simple
+ split: train[80%:100%]
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
+ value: 0.8379014446576633
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 
 
  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.4267
+ - Claim: {'precision': 0.6011011011011012, 'recall': 0.5762955854126679, 'f1-score': 0.58843704066634, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7353560893383903, 'recall': 0.8108736059479554, 'f1-score': 0.7712707182320443, 'support': 2152.0}
+ - O: {'precision': 0.9331677579589072, 'recall': 0.8959462388900932, 'f1-score': 0.9141782791417828, 'support': 9226.0}
+ - Premise: {'precision': 0.8658005164622337, 'recall': 0.8886772136171622, 'f1-score': 0.8770897200081749, 'support': 12073.0}
+ - Accuracy: 0.8379
+ - Macro avg: {'precision': 0.7838563662151581, 'recall': 0.7929481609669696, 'f1-score': 0.7877439395120855, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.838194397473588, 'recall': 0.8379014446576633, 'f1-score': 0.8376730933108891, 'support': 27619.0}
 
  ## Model description
 
 
 
  ### Training results
 
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+ | No log | 1.0 | 41 | 0.6166 | {'precision': 0.4200196270853778, 'recall': 0.2053742802303263, 'f1-score': 0.27586206896551724, 'support': 4168.0} | {'precision': 0.6073394495412844, 'recall': 0.46143122676579923, 'f1-score': 0.524425666754687, 'support': 2152.0} | {'precision': 0.897315672254132, 'recall': 0.8297203555170172, 'f1-score': 0.8621951906290477, 'support': 9226.0} | {'precision': 0.7481024975673046, 'recall': 0.9551892653027416, 'f1-score': 0.8390570430733411, 'support': 12073.0} | 0.7616 | {'precision': 0.6681943116120247, 'recall': 0.612928781953971, 'f1-score': 0.6253849923556483, 'support': 27619.0} | {'precision': 0.7374674009360002, 'recall': 0.7616495890510157, 'f1-score': 0.7372788894627758, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4575 | {'precision': 0.5743048897411314, 'recall': 0.43114203454894434, 'f1-score': 0.49253117719610806, 'support': 4168.0} | {'precision': 0.7058560572194904, 'recall': 0.7337360594795539, 'f1-score': 0.7195260879471406, 'support': 2152.0} | {'precision': 0.9206993795826283, 'recall': 0.8846737481031867, 'f1-score': 0.9023271239843015, 'support': 9226.0} | {'precision': 0.8243949805796236, 'recall': 0.9141886854965626, 'f1-score': 0.8669730175562625, 'support': 12073.0} | 0.8174 | {'precision': 0.7563138267807185, 'recall': 0.7409351319070618, 'f1-score': 0.7453393516709531, 'support': 27619.0} | {'precision': 0.8095875336596003, 'recall': 0.8173720989174119, 'f1-score': 0.8107869718183696, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4417 | {'precision': 0.6082102988836874, 'recall': 0.4052303262955854, 'f1-score': 0.4863930885529157, 'support': 4168.0} | {'precision': 0.7309513560051657, 'recall': 0.7890334572490706, 'f1-score': 0.7588826815642457, 'support': 2152.0} | {'precision': 0.9306548632391329, 'recall': 0.8887925428137872, 'f1-score': 0.9092421134334979, 'support': 9226.0} | {'precision': 0.8175517945725124, 'recall': 0.9282696927027251, 'f1-score': 0.8693999456964432, 'support': 12073.0} | 0.8253 | {'precision': 0.7718420781751247, 'recall': 0.7528315047652921, 'f1-score': 0.7559794573117755, 'support': 27619.0} | {'precision': 0.8169938241061772, 'recall': 0.8253014229334878, 'f1-score': 0.8162980269649668, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4247 | {'precision': 0.5918674698795181, 'recall': 0.5657389635316699, 'f1-score': 0.5785083415112856, 'support': 4168.0} | {'precision': 0.7616387337057728, 'recall': 0.7602230483271375, 'f1-score': 0.7609302325581395, 'support': 2152.0} | {'precision': 0.918848167539267, 'recall': 0.9130717537394321, 'f1-score': 0.9159508535391975, 'support': 9226.0} | {'precision': 0.8669534864842926, 'recall': 0.8846185703636213, 'f1-score': 0.8756969498196131, 'support': 12073.0} | 0.8363 | {'precision': 0.7848269644022126, 'recall': 0.7809130839904652, 'f1-score': 0.782771594357059, 'support': 27619.0} | {'precision': 0.8345694197992249, 'recall': 0.8363083384626525, 'f1-score': 0.8353523472178203, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4267 | {'precision': 0.6011011011011012, 'recall': 0.5762955854126679, 'f1-score': 0.58843704066634, 'support': 4168.0} | {'precision': 0.7353560893383903, 'recall': 0.8108736059479554, 'f1-score': 0.7712707182320443, 'support': 2152.0} | {'precision': 0.9331677579589072, 'recall': 0.8959462388900932, 'f1-score': 0.9141782791417828, 'support': 9226.0} | {'precision': 0.8658005164622337, 'recall': 0.8886772136171622, 'f1-score': 0.8770897200081749, 'support': 12073.0} | 0.8379 | {'precision': 0.7838563662151581, 'recall': 0.7929481609669696, 'f1-score': 0.7877439395120855, 'support': 27619.0} | {'precision': 0.838194397473588, 'recall': 0.8379014446576633, 'f1-score': 0.8376730933108891, 'support': 27619.0} |
 
 
  ### Framework versions
meta_data/README_s42_e6.md CHANGED
@@ -1,5 +1,4 @@
  ---
- license: apache-2.0
  base_model: allenai/longformer-base-4096
  tags:
  - generated_from_trainer
@@ -17,12 +16,12 @@ model-index:
  name: essays_su_g
  type: essays_su_g
  config: simple
- split: test
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.836576014905586
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,14 +31,14 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4521
- - Claim: {'precision': 0.5926012643409038, 'recall': 0.5952492944496708, 'f1-score': 0.5939223278188431, 'support': 4252.0}
- - Majorclaim: {'precision': 0.746797608881298, 'recall': 0.8015582034830431, 'f1-score': 0.773209549071618, 'support': 2182.0}
- - O: {'precision': 0.9330482727579611, 'recall': 0.8940161725067386, 'f1-score': 0.9131152956722828, 'support': 9275.0}
- - Premise: {'precision': 0.8684019663147715, 'recall': 0.8832786885245901, 'f1-score': 0.8757771546995001, 'support': 12200.0}
- - Accuracy: 0.8366
- - Macro avg: {'precision': 0.7852122780737336, 'recall': 0.7935255897410106, 'f1-score': 0.789006081815561, 'support': 27909.0}
- - Weighted avg: {'precision': 0.8383596573659686, 'recall': 0.836576014905586, 'f1-score': 0.837225505344309, 'support': 27909.0}
 
  ## Model description
 
@@ -68,14 +67,14 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
- |:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
- | No log | 1.0 | 41 | 0.5884 | {'precision': 0.49868766404199477, 'recall': 0.22342427093132644, 'f1-score': 0.30859184667857725, 'support': 4252.0} | {'precision': 0.6308376575240919, 'recall': 0.3900091659028414, 'f1-score': 0.4820164259416595, 'support': 2182.0} | {'precision': 0.817888247382327, 'recall': 0.9011320754716982, 'f1-score': 0.8574946137273007, 'support': 9275.0} | {'precision': 0.7873372125242449, 'recall': 0.9316393442622951, 'f1-score': 0.8534314461630875, 'support': 12200.0} | 0.7713 | {'precision': 0.6836876953681646, 'recall': 0.6115512141420403, 'f1-score': 0.6253835831276563, 'support': 27909.0} | {'precision': 0.7412782687839407, 'recall': 0.7712565838976674, 'f1-score': 0.7427359833384354, 'support': 27909.0} |
- | No log | 2.0 | 82 | 0.4638 | {'precision': 0.5763888888888888, 'recall': 0.5075258701787394, 'f1-score': 0.5397698849424711, 'support': 4252.0} | {'precision': 0.6741528762805359, 'recall': 0.7841429880843263, 'f1-score': 0.725, 'support': 2182.0} | {'precision': 0.9210763341589732, 'recall': 0.8820485175202156, 'f1-score': 0.9011400561766811, 'support': 9275.0} | {'precision': 0.8506865437426442, 'recall': 0.8886885245901639, 'f1-score': 0.8692723992784124, 'support': 12200.0} | 0.8202 | {'precision': 0.7555761607677606, 'recall': 0.7656014750933613, 'f1-score': 0.7587955850993913, 'support': 27909.0} | {'precision': 0.8184874400582042, 'recall': 0.8202371994697051, 'f1-score': 0.8183829174463697, 'support': 27909.0} |
- | No log | 3.0 | 123 | 0.4497 | {'precision': 0.6111299626739056, 'recall': 0.4235653809971778, 'f1-score': 0.5003472704542298, 'support': 4252.0} | {'precision': 0.7032967032967034, 'recall': 0.8212648945921174, 'f1-score': 0.7577167019027485, 'support': 2182.0} | {'precision': 0.9438293905139261, 'recall': 0.8732075471698113, 'f1-score': 0.9071460573476703, 'support': 9275.0} | {'precision': 0.8196342080532061, 'recall': 0.9293442622950819, 'f1-score': 0.8710482848692045, 'support': 12200.0} | 0.8252 | {'precision': 0.7694725661344353, 'recall': 0.7618455212635471, 'f1-score': 0.7590645786434633, 'support': 27909.0} | {'precision': 0.8200463271041109, 'recall': 0.8251818409831954, 'f1-score': 0.8177069473942856, 'support': 27909.0} |
- | No log | 4.0 | 164 | 0.4504 | {'precision': 0.5816213828142257, 'recall': 0.6192380056444027, 'f1-score': 0.5998405285340016, 'support': 4252.0} | {'precision': 0.6949866054343666, 'recall': 0.8322639780018332, 'f1-score': 0.7574556830031283, 'support': 2182.0} | {'precision': 0.9409930715935335, 'recall': 0.8785983827493261, 'f1-score': 0.908725954836911, 'support': 9275.0} | {'precision': 0.8776116937814848, 'recall': 0.8710655737704918, 'f1-score': 0.8743263811756962, 'support': 12200.0} | 0.8322 | {'precision': 0.7738031884059027, 'recall': 0.8002914850415135, 'f1-score': 0.7850871368874343, 'support': 27909.0} | {'precision': 0.8393023145203343, 'recall': 0.8321688344261707, 'f1-score': 0.8348025837219264, 'support': 27909.0} |
- | No log | 5.0 | 205 | 0.4540 | {'precision': 0.5803511891531451, 'recall': 0.6140639698965192, 'f1-score': 0.5967318020797622, 'support': 4252.0} | {'precision': 0.7292703150912107, 'recall': 0.806141154903758, 'f1-score': 0.7657814540705268, 'support': 2182.0} | {'precision': 0.9338842975206612, 'recall': 0.8893800539083558, 'f1-score': 0.9110890214269937, 'support': 9275.0} | {'precision': 0.8739005343197699, 'recall': 0.8713934426229508, 'f1-score': 0.8726451877693413, 'support': 12200.0} | 0.8331 | {'precision': 0.7793515840211966, 'recall': 0.795244655332896, 'f1-score': 0.7865618663366559, 'support': 27909.0} | {'precision': 0.8378044523993523, 'recall': 0.8330646028162958, 'f1-score': 0.8350303027606281, 'support': 27909.0} |
- | No log | 6.0 | 246 | 0.4521 | {'precision': 0.5926012643409038, 'recall': 0.5952492944496708, 'f1-score': 0.5939223278188431, 'support': 4252.0} | {'precision': 0.746797608881298, 'recall': 0.8015582034830431, 'f1-score': 0.773209549071618, 'support': 2182.0} | {'precision': 0.9330482727579611, 'recall': 0.8940161725067386, 'f1-score': 0.9131152956722828, 'support': 9275.0} | {'precision': 0.8684019663147715, 'recall': 0.8832786885245901, 'f1-score': 0.8757771546995001, 'support': 12200.0} | 0.8366 | {'precision': 0.7852122780737336, 'recall': 0.7935255897410106, 'f1-score': 0.789006081815561, 'support': 27909.0} | {'precision': 0.8383596573659686, 'recall': 0.836576014905586, 'f1-score': 0.837225505344309, 'support': 27909.0} |
 
 
  ### Framework versions
 
  ---
 
  base_model: allenai/longformer-base-4096
  tags:
  - generated_from_trainer
 
  name: essays_su_g
  type: essays_su_g
  config: simple
+ split: train[80%:100%]
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
+ value: 0.8417393823092798
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 
 
  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.4340
+ - Claim: {'precision': 0.6054216867469879, 'recall': 0.5786948176583493, 'f1-score': 0.591756624141315, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7709074733096085, 'recall': 0.8052973977695167, 'f1-score': 0.7877272727272727, 'support': 2152.0}
+ - O: {'precision': 0.9340387212967132, 'recall': 0.8994146975937568, 'f1-score': 0.916399779127554, 'support': 9226.0}
+ - Premise: {'precision': 0.8641925937774934, 'recall': 0.8949722521328585, 'f1-score': 0.8793131510416666, 'support': 12073.0}
+ - Accuracy: 0.8417
+ - Macro avg: {'precision': 0.7936401187827008, 'recall': 0.7945947912886202, 'f1-score': 0.793799206759452, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8412045657077691, 'recall': 0.8417393823092798, 'f1-score': 0.8411703079433343, 'support': 27619.0}
 
  ## Model description
 
 
 
  ### Training results
 
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+ | No log | 1.0 | 41 | 0.6062 | {'precision': 0.46017699115044247, 'recall': 0.21209213051823417, 'f1-score': 0.2903596649696173, 'support': 4168.0} | {'precision': 0.6158224245873648, 'recall': 0.5027881040892194, 'f1-score': 0.5535942696341775, 'support': 2152.0} | {'precision': 0.8984457169568774, 'recall': 0.8332972035551701, 'f1-score': 0.8646460102344935, 'support': 9226.0} | {'precision': 0.751105044201768, 'recall': 0.9570943427482813, 'f1-score': 0.841679717376261, 'support': 12073.0} | 0.7679 | {'precision': 0.6813875442241132, 'recall': 0.6263179452277262, 'f1-score': 0.6375699155536373, 'support': 27619.0} | {'precision': 0.745878523484527, 'recall': 0.7679133929541258, 'f1-score': 0.7437045972031264, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4588 | {'precision': 0.5838409746713691, 'recall': 0.43690019193857965, 'f1-score': 0.49979415397282845, 'support': 4168.0} | {'precision': 0.6924335378323109, 'recall': 0.7867100371747212, 'f1-score': 0.736567326517294, 'support': 2152.0} | {'precision': 0.9328012953967152, 'recall': 0.8741599826577064, 'f1-score': 0.9025290957923008, 'support': 9226.0} | {'precision': 0.8268327242896562, 'recall': 0.9183301582042575, 'f1-score': 0.8701828741856997, 'support': 12073.0} | 0.8207 | {'precision': 0.7589771330475128, 'recall': 0.7540250924938162, 'f1-score': 0.7522683626170307, 'support': 27619.0} | {'precision': 0.8150889745292919, 'recall': 0.8206669321843658, 'f1-score': 0.8146814221459026, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4322 | {'precision': 0.5977704127749323, 'recall': 0.4760076775431862, 'f1-score': 0.5299853078669694, 'support': 4168.0} | {'precision': 0.7029702970297029, 'recall': 0.824814126394052, 'f1-score': 0.7590335685268335, 'support': 2152.0} | {'precision': 0.9453125, 'recall': 0.8787123347062649, 'f1-score': 0.9107965397146388, 'support': 9226.0} | {'precision': 0.8376392150920524, 'recall': 0.9157624451254867, 'f1-score': 0.8749604305159862, 'support': 12073.0} | 0.8299 | {'precision': 0.7709231062241719, 'recall': 0.7738241459422475, 'f1-score': 0.7686939616561069, 'support': 27619.0} | {'precision': 0.826915186229052, 'recall': 0.8299359136826098, 'f1-score': 0.8258381967372473, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4234 | {'precision': 0.6074243579964403, 'recall': 0.5731765834932822, 'f1-score': 0.5898037279348228, 'support': 4168.0} | {'precision': 0.8064516129032258, 'recall': 0.7202602230483272, 'f1-score': 0.7609229258713793, 'support': 2152.0} | {'precision': 0.897263864136702, 'recall': 0.9277043138955127, 'f1-score': 0.9122302158273382, 'support': 9226.0} | {'precision': 0.8721472392638037, 'recall': 0.8831276401888511, 'f1-score': 0.8776030949049305, 'support': 12073.0} | 0.8386 | {'precision': 0.7958217685750428, 'recall': 0.7760671901564933, 'f1-score': 0.7851399911346177, 'support': 27619.0} | {'precision': 0.8354690113781823, 'recall': 0.8385531699192584, 'f1-score': 0.8366467363234656, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4306 | {'precision': 0.6152236463510332, 'recall': 0.564299424184261, 'f1-score': 0.5886622450256539, 'support': 4168.0} | {'precision': 0.7490330898152128, 'recall': 0.8099442379182156, 'f1-score': 0.7782987273945077, 'support': 2152.0} | {'precision': 0.9314760727926309, 'recall': 0.8987643615868198, 'f1-score': 0.9148278905560459, 'support': 9226.0} | {'precision': 0.863292750855415, 'recall': 0.89861674811563, 'f1-score': 0.8806006493506493, 'support': 12073.0} | 0.8413 | {'precision': 0.789756389953573, 'recall': 0.7929061929512317, 'f1-score': 0.7905973780817142, 'support': 27619.0} | {'precision': 0.8397300045597481, 'recall': 0.8413048988015497, 'f1-score': 0.8400064034360539, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4340 | {'precision': 0.6054216867469879, 'recall': 0.5786948176583493, 'f1-score': 0.591756624141315, 'support': 4168.0} | {'precision': 0.7709074733096085, 'recall': 0.8052973977695167, 'f1-score': 0.7877272727272727, 'support': 2152.0} | {'precision': 0.9340387212967132, 'recall': 0.8994146975937568, 'f1-score': 0.916399779127554, 'support': 9226.0} | {'precision': 0.8641925937774934, 'recall': 0.8949722521328585, 'f1-score': 0.8793131510416666, 'support': 12073.0} | 0.8417 | {'precision': 0.7936401187827008, 'recall': 0.7945947912886202, 'f1-score': 0.793799206759452, 'support': 27619.0} | {'precision': 0.8412045657077691, 'recall': 0.8417393823092798, 'f1-score': 0.8411703079433343, 'support': 27619.0} |
 
 
  ### Framework versions
meta_data/README_s42_e7.md CHANGED
@@ -17,12 +17,12 @@ model-index:
  name: essays_su_g
  type: essays_su_g
  config: simple
- split: test
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.8374001218245011
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,14 +32,14 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4624
- - Claim: {'precision': 0.5906025179856115, 'recall': 0.617826904985889, 'f1-score': 0.6039080459770114, 'support': 4252.0}
- - Majorclaim: {'precision': 0.7631810193321616, 'recall': 0.7960586617781852, 'f1-score': 0.7792732166890982, 'support': 2182.0}
- - O: {'precision': 0.9296403841858387, 'recall': 0.897466307277628, 'f1-score': 0.913270064183444, 'support': 9275.0}
- - Premise: {'precision': 0.8734363502575423, 'recall': 0.875655737704918, 'f1-score': 0.8745446359133887, 'support': 12200.0}
- - Accuracy: 0.8374
- - Macro avg: {'precision': 0.7892150679402886, 'recall': 0.7967519029366551, 'f1-score': 0.7927489906907357, 'support': 27909.0}
- - Weighted avg: {'precision': 0.8404042039171331, 'recall': 0.8374001218245011, 'f1-score': 0.8387335832080923, 'support': 27909.0}
 
  ## Model description
 
@@ -68,15 +68,15 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
- |:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
- | No log | 1.0 | 41 | 0.5869 | {'precision': 0.50130958617077, 'recall': 0.2250705550329257, 'f1-score': 0.310663853270573, 'support': 4252.0} | {'precision': 0.6411985018726591, 'recall': 0.3923006416131989, 'f1-score': 0.48677850440716514, 'support': 2182.0} | {'precision': 0.8172767203513909, 'recall': 0.9027493261455526, 'f1-score': 0.8578893442622951, 'support': 9275.0} | {'precision': 0.7879334257975035, 'recall': 0.931311475409836, 'f1-score': 0.8536438767843726, 'support': 12200.0} | 0.7721 | {'precision': 0.6869295585480809, 'recall': 0.6128579995503783, 'f1-score': 0.6272438946811014, 'support': 27909.0} | {'precision': 0.7425451598936884, 'recall': 0.7720806908165825, 'f1-score': 0.7436480119504476, 'support': 27909.0} |
- | No log | 2.0 | 82 | 0.4605 | {'precision': 0.5800683670786222, 'recall': 0.5188146754468486, 'f1-score': 0.5477343265052762, 'support': 4252.0} | {'precision': 0.679080824088748, 'recall': 0.7855178735105408, 'f1-score': 0.7284317892052699, 'support': 2182.0} | {'precision': 0.9250369696280286, 'recall': 0.8767654986522911, 'f1-score': 0.900254621941769, 'support': 9275.0} | {'precision': 0.8497380970995231, 'recall': 0.8909016393442623, 'f1-score': 0.8698331399303749, 'support': 12200.0} | 0.8213 | {'precision': 0.7584810644737304, 'recall': 0.7679999217384857, 'f1-score': 0.7615634693956725, 'support': 27909.0} | {'precision': 0.8203349361458346, 'recall': 0.8212762908022502, 'f1-score': 0.8198154876923865, 'support': 27909.0} |
- | No log | 3.0 | 123 | 0.4587 | {'precision': 0.6081277213352685, 'recall': 0.39416745061147695, 'f1-score': 0.478310502283105, 'support': 4252.0} | {'precision': 0.7005473025801408, 'recall': 0.8212648945921174, 'f1-score': 0.7561181434599156, 'support': 2182.0} | {'precision': 0.9445551517993201, 'recall': 0.8687870619946092, 'f1-score': 0.905088172526115, 'support': 9275.0} | {'precision': 0.8125, 'recall': 0.9366393442622951, 'f1-score': 0.8701644837039293, 'support': 12200.0} | 0.8224 | {'precision': 0.7664325439286823, 'recall': 0.7552146878651247, 'f1-score': 0.7524203254932662, 'support': 27909.0} | {'precision': 0.8164965537384401, 'recall': 0.8224228743416102, 'f1-score': 0.8131543783763286, 'support': 27909.0} |
- | No log | 4.0 | 164 | 0.4491 | {'precision': 0.5829145728643216, 'recall': 0.6274694261523989, 'f1-score': 0.6043719560539133, 'support': 4252.0} | {'precision': 0.7112758486149044, 'recall': 0.8354720439963337, 'f1-score': 0.7683877766069548, 'support': 2182.0} | {'precision': 0.9357652656621729, 'recall': 0.8905660377358491, 'f1-score': 0.9126063418406806, 'support': 9275.0} | {'precision': 0.881426896667225, 'recall': 0.8627868852459016, 'f1-score': 0.8720072901996521, 'support': 12200.0} | 0.8340 | {'precision': 0.7778456459521561, 'recall': 0.8040735982826209, 'f1-score': 0.7893433411753001, 'support': 27909.0} | {'precision': 0.8407032729174679, 'recall': 0.8340320326776308, 'f1-score': 0.8366234708053201, 'support': 27909.0} |
- | No log | 5.0 | 205 | 0.4611 | {'precision': 0.5805860805860806, 'recall': 0.5964252116650988, 'f1-score': 0.588399071925754, 'support': 4252.0} | {'precision': 0.7489102005231038, 'recall': 0.7873510540788268, 'f1-score': 0.7676496872207329, 'support': 2182.0} | {'precision': 0.9323308270676691, 'recall': 0.8957412398921832, 'f1-score': 0.9136698559331353, 'support': 9275.0} | {'precision': 0.8673800259403373, 'recall': 0.8770491803278688, 'f1-score': 0.8721878056732963, 'support': 12200.0} | 0.8335 | {'precision': 0.7823017835292977, 'recall': 0.7891416714909945, 'f1-score': 0.7854766051882296, 'support': 27909.0} | {'precision': 0.8360091300196414, 'recall': 0.8334945716435559, 'f1-score': 0.8345646069131102, 'support': 27909.0} |
- | No log | 6.0 | 246 | 0.4642 | {'precision': 0.5962333486449242, 'recall': 0.6105362182502352, 'f1-score': 0.6033000232396003, 'support': 4252.0} | {'precision': 0.7385488447507094, 'recall': 0.8350137488542622, 'f1-score': 0.783824478382448, 'support': 2182.0} | {'precision': 0.9409678526484384, 'recall': 0.8867924528301887, 'f1-score': 0.9130772646536413, 'support': 9275.0} | {'precision': 0.8715477443913501, 'recall': 0.8820491803278688, 'f1-score': 0.8767670183729173, 'support': 12200.0} | 0.8386 | {'precision': 0.7868244476088555, 'recall': 0.8035979000656387, 'f1-score': 0.7942421961621517, 'support': 27909.0} | {'precision': 0.8422751475356695, 'recall': 0.8385825360994661, 'f1-score': 0.8399041873394746, 'support': 27909.0} |
- | No log | 7.0 | 287 | 0.4624 | {'precision': 0.5906025179856115, 'recall': 0.617826904985889, 'f1-score': 0.6039080459770114, 'support': 4252.0} | {'precision': 0.7631810193321616, 'recall': 0.7960586617781852, 'f1-score': 0.7792732166890982, 'support': 2182.0} | {'precision': 0.9296403841858387, 'recall': 0.897466307277628, 'f1-score': 0.913270064183444, 'support': 9275.0} | {'precision': 0.8734363502575423, 'recall': 0.875655737704918, 'f1-score': 0.8745446359133887, 'support': 12200.0} | 0.8374 | {'precision': 0.7892150679402886, 'recall': 0.7967519029366551, 'f1-score': 0.7927489906907357, 'support': 27909.0} | {'precision': 0.8404042039171331, 'recall': 0.8374001218245011, 'f1-score': 0.8387335832080923, 'support': 27909.0} |
 
 
  ### Framework versions
 
  name: essays_su_g
  type: essays_su_g
  config: simple
+ split: train[80%:100%]
  args: simple
  metrics:
  - name: Accuracy
  type: accuracy
+ value: 0.8472790470328397
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 
 
  This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.4453
+ - Claim: {'precision': 0.6184998801821232, 'recall': 0.6192418426103646, 'f1-score': 0.6188706390121088, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7669435942282467, 'recall': 0.8150557620817844, 'f1-score': 0.7902680783960352, 'support': 2152.0}
+ - O: {'precision': 0.9382436260623229, 'recall': 0.897463689572946, 'f1-score': 0.9174006980222702, 'support': 9226.0}
+ - Premise: {'precision': 0.8744932706340198, 'recall': 0.8933984925039344, 'f1-score': 0.8838447986233458, 'support': 12073.0}
+ - Accuracy: 0.8473
+ - Macro avg: {'precision': 0.7995450927766782, 'recall': 0.8062899466922574, 'f1-score': 0.80259605351344, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8487766778592197, 'recall': 0.8472790470328397, 'f1-score': 0.8477753293690524, 'support': 27619.0}
 
  ## Model description
 
 
 
  ### Training results
 
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+ | No log | 1.0 | 41 | 0.5604 | {'precision': 0.4942473633748802, 'recall': 0.2473608445297505, 'f1-score': 0.3297089862488008, 'support': 4168.0} | {'precision': 0.5518606492478226, 'recall': 0.6477695167286245, 'f1-score': 0.5959811885421119, 'support': 2152.0} | {'precision': 0.9073273343461493, 'recall': 0.8415347929763711, 'f1-score': 0.8731934994095485, 'support': 9226.0} | {'precision': 0.784083044982699, 'recall': 0.9384577155636544, 'f1-score': 0.8543528258492629, 'support': 12073.0} | 0.7791 | {'precision': 0.6843795979878877, 'recall': 0.6687807174496001, 'f1-score': 0.663309125012431, 'support': 27619.0} | {'precision': 0.7634187956291506, 'recall': 0.7791375502371556, 'f1-score': 0.761340507058846, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4518 | {'precision': 0.577831617201696, 'recall': 0.45777351247600767, 'f1-score': 0.510843373493976, 'support': 4168.0} | {'precision': 0.6824378508420208, 'recall': 0.7908921933085502, 'f1-score': 0.7326732673267327, 'support': 2152.0} | {'precision': 0.9419194900247905, 'recall': 0.8648384998916107, 'f1-score': 0.9017347573034978, 'support': 9226.0} | {'precision': 0.8292390653085681, 'recall': 0.917087716391949, 'f1-score': 0.8709537856440511, 'support': 12073.0} | 0.8205 | {'precision': 0.7578570058442688, 'recall': 0.7576479805170293, 'f1-score': 0.7540512959420644, 'support': 27619.0} | {'precision': 0.8175010277688459, 'recall': 0.8204858973894783, 'f1-score': 0.8161170924715856, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4276 | {'precision': 0.5879345603271984, 'recall': 0.5518234165067178, 'f1-score': 0.5693069306930693, 'support': 4168.0} | {'precision': 0.6929858183211959, 'recall': 0.8401486988847584, 'f1-score': 0.7595043058181055, 'support': 2152.0} | {'precision': 0.944093567251462, 'recall': 0.8749187079991328, 'f1-score': 0.9081908190819081, 'support': 9226.0} | {'precision': 0.8596589097864201, 'recall': 0.8934813219580883, 'f1-score': 0.8762438568701515, 'support': 12073.0} | 0.8316 | {'precision': 0.7711682139215691, 'recall': 0.7900930363371743, 'f1-score': 0.7783114781158085, 'support': 27619.0} | {'precision': 0.8338711031458205, 'recall': 0.831565226836598, 'f1-score': 0.8314995160611283, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4280 | {'precision': 0.6108695652173913, 'recall': 0.5393474088291746, 'f1-score': 0.5728848114169215, 'support': 4168.0} | {'precision': 0.803450078410873, 'recall': 0.7142193308550185, 'f1-score': 0.7562115621156212, 'support': 2152.0} | {'precision': 0.9037745879851143, 'recall': 0.921309343160633, 'f1-score': 0.9124577317374268, 'support': 9226.0} | {'precision': 0.8595990808969178, 'recall': 0.89861674811563, 'f1-score': 0.8786749817769499, 'support': 12073.0} | 0.8376 | {'precision': 0.7944233281275741, 'recall': 0.7683732077401141, 'f1-score': 0.7800572717617298, 'support': 27619.0} | {'precision': 0.832444801368096, 'recall': 0.8376117889858431, 'f1-score': 0.8342709462203975, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4388 | {'precision': 0.6131295414683037, 'recall': 0.5870921305182342, 'f1-score': 0.5998284103444048, 'support': 4168.0} | {'precision': 0.746058798466127, 'recall': 0.8136617100371747, 'f1-score': 0.7783951989330962, 'support': 2152.0} | {'precision': 0.935686543294494, 'recall': 0.8878170388033817, 'f1-score': 0.911123470522803, 'support': 9226.0} | {'precision': 0.8646922647082302, 'recall': 0.8972086473950137, 'f1-score': 0.8806504065040651, 'support': 12073.0} | 0.8408 | {'precision': 0.7898917869842887, 'recall': 0.796444881688451, 'f1-score': 0.7924993715760923, 'support': 27619.0} | {'precision': 0.841200486020365, 'recall': 0.8407617944168869, 'f1-score': 0.840483318700404, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4455 | {'precision': 0.61596495497688, 'recall': 0.6072456813819578, 'f1-score': 0.6115742418750755, 'support': 4168.0} | {'precision': 0.7737881508078994, 'recall': 0.8011152416356877, 'f1-score': 0.7872146118721461, 'support': 2152.0} | {'precision': 0.9405251141552512, 'recall': 0.8930197268588771, 'f1-score': 0.9161570110085622, 'support': 9226.0} | {'precision': 0.8682319118351701, 'recall': 0.9005218255611696, 'f1-score': 0.8840821305143322, 'support': 12073.0} | 0.8460 | {'precision': 0.7996275329438002, 'recall': 0.800475618859423, 'f1-score': 0.7997569988175289, 'support': 27619.0} | {'precision': 0.8469525546784674, 'recall': 0.8460118034686267, 'f1-score': 0.846124603720218, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.4453 | {'precision': 0.6184998801821232, 'recall': 0.6192418426103646, 'f1-score': 0.6188706390121088, 'support': 4168.0} | {'precision': 0.7669435942282467, 'recall': 0.8150557620817844, 'f1-score': 0.7902680783960352, 'support': 2152.0} | {'precision': 0.9382436260623229, 'recall': 0.897463689572946, 'f1-score': 0.9174006980222702, 'support': 9226.0} | {'precision': 0.8744932706340198, 'recall': 0.8933984925039344, 'f1-score': 0.8838447986233458, 'support': 12073.0} | 0.8473 | {'precision': 0.7995450927766782, 'recall': 0.8062899466922574, 'f1-score': 0.80259605351344, 'support': 27619.0} | {'precision': 0.8487766778592197, 'recall': 0.8472790470328397, 'f1-score': 0.8477753293690524, 'support': 27619.0} |
 
 
  ### Framework versions
meta_data/README_s42_e8.md ADDED
@@ -0,0 +1,88 @@
+ ---
+ license: apache-2.0
+ base_model: allenai/longformer-base-4096
+ tags:
+ - generated_from_trainer
+ datasets:
+ - essays_su_g
+ metrics:
+ - accuracy
+ model-index:
+ - name: longformer-simple
+ results:
+ - task:
+ name: Token Classification
+ type: token-classification
+ dataset:
+ name: essays_su_g
+ type: essays_su_g
+ config: simple
+ split: train[80%:100%]
+ args: simple
+ metrics:
+ - name: Accuracy
+ type: accuracy
+ value: 0.8421376588580325
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # longformer-simple
+
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4683
+ - Claim: {'precision': 0.5930310475765022, 'recall': 0.636996161228407, 'f1-score': 0.6142278773857722, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7712014134275619, 'recall': 0.8113382899628253, 'f1-score': 0.7907608695652174, 'support': 2152.0}
+ - O: {'precision': 0.9350634632819583, 'recall': 0.8943203988727509, 'f1-score': 0.9142382271468144, 'support': 9226.0}
+ - Premise: {'precision': 0.8799568607930978, 'recall': 0.8785720202103868, 'f1-score': 0.8792638952211216, 'support': 12073.0}
+ - Accuracy: 0.8421
+ - Macro avg: {'precision': 0.7948131962697801, 'recall': 0.8053067175685925, 'f1-score': 0.7996227173297313, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.846590880936652, 'recall': 0.8421376588580325, 'f1-score': 0.8440542407367884, 'support': 27619.0}
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 8
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+ | No log | 1.0 | 41 | 0.5650 | {'precision': 0.5066032752245113, 'recall': 0.23008637236084453, 'f1-score': 0.31644943078699883, 'support': 4168.0} | {'precision': 0.5405718701700154, 'recall': 0.650092936802974, 'f1-score': 0.5902953586497891, 'support': 2152.0} | {'precision': 0.9117542823390431, 'recall': 0.8365488835898548, 'f1-score': 0.8725340568650726, 'support': 9226.0} | {'precision': 0.7800040891433244, 'recall': 0.9479831027913526, 'f1-score': 0.8558289089957376, 'support': 12073.0} | 0.7792 | {'precision': 0.6847333792192236, 'recall': 0.6661778238862565, 'f1-score': 0.6587769388243996, 'support': 27619.0} | {'precision': 0.7640996231879866, 'recall': 0.7792099641551106, 'f1-score': 0.7593214260573249, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4458 | {'precision': 0.5965486462362393, 'recall': 0.4810460652591171, 'f1-score': 0.5326072519590915, 'support': 4168.0} | {'precision': 0.7099605089951733, 'recall': 0.7518587360594795, 'f1-score': 0.7303091852854885, 'support': 2152.0} | {'precision': 0.9064716795809232, 'recall': 0.9002818122696726, 'f1-score': 0.9033661428027624, 'support': 9226.0} | {'precision': 0.848392634207241, 'recall': 0.9006046550153235, 'f1-score': 0.8737193137530637, 'support': 12073.0} | 0.8256 | {'precision': 0.7653433672548942, 'recall': 0.7584478171508982, 'f1-score': 0.7600004734501016, 'support': 27619.0} | {'precision': 0.8190014758487952, 'recall': 0.8255910786053079, 'f1-score': 0.8209711322400842, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4332 | {'precision': 0.5755813953488372, 'recall': 0.5700575815738963, 'f1-score': 0.5728061716489875, 'support': 4168.0} | {'precision': 0.6984323432343235, 'recall': 0.7867100371747212, 'f1-score': 0.7399475524475525, 'support': 2152.0} | {'precision': 0.9506590881605999, 'recall': 0.8520485584218513, 'f1-score': 0.8986567590740212, 'support': 9226.0} | {'precision': 0.853180184403813, 'recall': 0.9044148099064027, 'f1-score': 0.8780507418278317, 'support': 12073.0} | 0.8273 | {'precision': 0.7694632527868934, 'recall': 0.7783077467692179, 'f1-score': 0.7723653062495982, 'support': 27619.0} | {'precision': 0.8317924172537436, 'recall': 0.8272928056772512, 'f1-score': 0.8281088063146546, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4213 | {'precision': 0.6149187998898982, 'recall': 0.5359884836852208, 'f1-score': 0.5727470837072169, 'support': 4168.0} | {'precision': 0.7890625, 'recall': 0.7509293680297398, 'f1-score': 0.7695238095238095, 'support': 2152.0} | {'precision': 0.90938406965495, 'recall': 0.9169737697810535, 'f1-score': 0.9131631496572938, 'support': 9226.0} | {'precision': 0.8585674713098536, 'recall': 0.898533918661476, 'f1-score': 0.8780961631860126, 'support': 12073.0} | 0.8385 | {'precision': 0.7929832102136755, 'recall': 0.7756063850393726, 'f1-score': 0.7833825515185833, 'support': 27619.0} | {'precision': 0.8333577090300708, 'recall': 0.8384807560013035, 'f1-score': 0.8352700416332903, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4305 | {'precision': 0.5943820224719101, 'recall': 0.6345969289827256, 'f1-score': 0.613831515432815, 'support': 4168.0} | {'precision': 0.7417763157894737, 'recall': 0.8382899628252788, 'f1-score': 0.7870855148342057, 'support': 2152.0} | {'precision': 0.9332355926468929, 'recall': 0.8969217429004986, 'f1-score': 0.9147183993809761, 'support': 9226.0} | {'precision': 0.8866048862679022, 'recall': 0.8716971755156133, 'f1-score': 0.8790878336048114, 'support': 12073.0} | 0.8417 | {'precision': 0.7889997042940448, 'recall': 0.8103764525560291, 'f1-score': 0.7986808158132019, 'support': 27619.0} | {'precision': 0.8467974680804694, 'recall': 0.8417393823092798, 'f1-score': 0.8437914896284063, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4474 | {'precision': 0.6243768693918246, 'recall': 0.6010076775431862, 'f1-score': 0.6124694376528118, 'support': 4168.0} | {'precision': 0.782648401826484, 'recall': 0.7964684014869888, 'f1-score': 0.7894979272224781, 'support': 2152.0} | {'precision': 0.9337994812225104, 'recall': 0.897463689572946, 'f1-score': 0.9152710993201791, 'support': 9226.0} | {'precision': 0.8685258964143426, 'recall': 0.9028410502774786, 'f1-score': 0.8853510945051374, 'support': 12073.0} | 0.8472 | {'precision': 0.8023376622137904, 'recall': 0.7994452047201499, 'f1-score': 0.8006473896751517, 'support': 27619.0} | {'precision': 0.8467942109969572, 'recall': 0.8472066331148846, 'f1-score': 0.8466963714040403, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.4659 | {'precision': 0.6118966357874208, 'recall': 0.6022072936660269, 'f1-score': 0.6070133010882709, 'support': 4168.0} | {'precision': 0.7918010133578995, 'recall': 0.7987918215613383, 'f1-score': 0.7952810548230396, 'support': 2152.0} | {'precision': 0.9346397825101949, 'recall': 0.8943203988727509, 'f1-score': 0.9140356707654813, 'support': 9226.0} | {'precision': 0.8677104968844863, 'recall': 0.8996935310196306, 'f1-score': 0.8834126306372251, 'support': 12073.0} | 0.8451 | {'precision': 0.8015119821350003, 'recall': 0.7987532612799366, 'f1-score': 0.7999356643285043, 'support': 27619.0} | {'precision': 0.845548224810226, 'recall': 0.8451428364531663, 'f1-score': 0.845063545279722, 'support': 27619.0} |
+ | No log | 8.0 | 328 | 0.4683 | {'precision': 0.5930310475765022, 'recall': 0.636996161228407, 'f1-score': 0.6142278773857722, 'support': 4168.0} | {'precision': 0.7712014134275619, 'recall': 0.8113382899628253, 'f1-score': 0.7907608695652174, 'support': 2152.0} | {'precision': 0.9350634632819583, 'recall': 0.8943203988727509, 'f1-score': 0.9142382271468144, 'support': 9226.0} | {'precision': 0.8799568607930978, 'recall': 0.8785720202103868, 'f1-score': 0.8792638952211216, 'support': 12073.0} | 0.8421 | {'precision': 0.7948131962697801, 'recall': 0.8053067175685925, 'f1-score': 0.7996227173297313, 'support': 27619.0} | {'precision': 0.846590880936652, 'recall': 0.8421376588580325, 'f1-score': 0.8440542407367884, 'support': 27619.0} |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
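The linear scheduler listed above decays the learning rate from 2e-05 to zero over the whole run. As a minimal stdlib-only sketch of that schedule (assuming no warmup, which matches the card, and using the 41 optimizer steps per epoch visible in the results table):

```python
def linear_lr(step, total_steps, base_lr=2e-5, warmup_steps=0):
    """Linearly warm up to base_lr, then decay to 0 (lr_scheduler_type: linear)."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0.0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

total_steps = 41 * 8  # 41 optimizer steps per epoch x num_epochs: 8 = 328

print(linear_lr(0, total_steps))            # 2e-05 at the first step
print(linear_lr(164, total_steps))          # 1e-05 halfway through
print(linear_lr(total_steps, total_steps))  # 0.0 at the final step
```

This matches the final "Step 328" row of the table above: with 8 epochs the rate reaches zero exactly at the last optimizer step.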
meta_data/README_s42_e9.md ADDED
@@ -0,0 +1,89 @@
+ ---
+ license: apache-2.0
+ base_model: allenai/longformer-base-4096
+ tags:
+ - generated_from_trainer
+ datasets:
+ - essays_su_g
+ metrics:
+ - accuracy
+ model-index:
+ - name: longformer-simple
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: essays_su_g
+       type: essays_su_g
+       config: simple
+       split: train[80%:100%]
+       args: simple
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.8420290379811
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # longformer-simple
+
+ This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4966
+ - Claim: {'precision': 0.5958668197474167, 'recall': 0.6226007677543186, 'f1-score': 0.6089405139035551, 'support': 4168.0}
+ - Majorclaim: {'precision': 0.7666666666666667, 'recall': 0.8122676579925651, 'f1-score': 0.7888086642599278, 'support': 2152.0}
+ - O: {'precision': 0.934957507082153, 'recall': 0.8943203988727509, 'f1-score': 0.9141875796354773, 'support': 9226.0}
+ - Premise: {'precision': 0.8768813224771774, 'recall': 0.8831276401888511, 'f1-score': 0.8799933971607792, 'support': 12073.0}
+ - Accuracy: 0.8420
+ - Macro avg: {'precision': 0.7935930789933534, 'recall': 0.8030791162021215, 'f1-score': 0.7979825387399347, 'support': 27619.0}
+ - Weighted avg: {'precision': 0.8452856996263733, 'recall': 0.8420290379811, 'f1-score': 0.843406176946174, 'support': 27619.0}
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 9
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+ |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+ | No log | 1.0 | 41 | 0.5668 | {'precision': 0.5033522434244456, 'recall': 0.23416506717850288, 'f1-score': 0.3196332077943344, 'support': 4168.0} | {'precision': 0.5278396436525612, 'recall': 0.6607806691449815, 'f1-score': 0.58687577383409, 'support': 2152.0} | {'precision': 0.9158374295648004, 'recall': 0.8279861261651853, 'f1-score': 0.8696988671941708, 'support': 9226.0} | {'precision': 0.7802663024923182, 'recall': 0.9464921726165825, 'f1-score': 0.8553783965865709, 'support': 12073.0} | 0.7771 | {'precision': 0.6818239047835313, 'recall': 0.6673560087763131, 'f1-score': 0.6578965613522916, 'support': 27619.0} | {'precision': 0.76409552333133, 'recall': 0.7771461674933923, 'f1-score': 0.7583914336543988, 'support': 27619.0} |
+ | No log | 2.0 | 82 | 0.4435 | {'precision': 0.5961424332344214, 'recall': 0.4820057581573896, 'f1-score': 0.5330326346511012, 'support': 4168.0} | {'precision': 0.7147918511957484, 'recall': 0.75, 'f1-score': 0.7319727891156461, 'support': 2152.0} | {'precision': 0.9080421885299934, 'recall': 0.8958378495556037, 'f1-score': 0.9018987341772151, 'support': 9226.0} | {'precision': 0.8463030491116456, 'recall': 0.9035036859107098, 'f1-score': 0.8739684320166653, 'support': 12073.0} | 0.8254 | {'precision': 0.7663198805179522, 'recall': 0.7578368234059257, 'f1-score': 0.7602181474901569, 'support': 27619.0} | {'precision': 0.8189278275389019, 'recall': 0.8253738368514428, 'f1-score': 0.8207836657612095, 'support': 27619.0} |
+ | No log | 3.0 | 123 | 0.4314 | {'precision': 0.573660177841865, 'recall': 0.5726967370441459, 'f1-score': 0.5731780525873454, 'support': 4168.0} | {'precision': 0.6975589573851882, 'recall': 0.783457249070632, 'f1-score': 0.7380170715692711, 'support': 2152.0} | {'precision': 0.9517867958812841, 'recall': 0.8516150010838933, 'f1-score': 0.8989188261541102, 'support': 9226.0} | {'precision': 0.8546066009698108, 'recall': 0.9050774455396339, 'f1-score': 0.8791182267991471, 'support': 12073.0} | 0.8276 | {'precision': 0.769403133019537, 'recall': 0.7782116081845762, 'f1-score': 0.7723080442774685, 'support': 27619.0} | {'precision': 0.8324346634507792, 'recall': 0.8275824613490713, 'f1-score': 0.8285686774845233, 'support': 27619.0} |
+ | No log | 4.0 | 164 | 0.4209 | {'precision': 0.6254607314998583, 'recall': 0.5292706333973128, 'f1-score': 0.5733593242365173, 'support': 4168.0} | {'precision': 0.7988077496274217, 'recall': 0.7472118959107806, 'f1-score': 0.7721488595438176, 'support': 2152.0} | {'precision': 0.9066852367688022, 'recall': 0.917298937784522, 'f1-score': 0.9119612068965518, 'support': 9226.0} | {'precision': 0.8564927422518634, 'recall': 0.904166321543941, 'f1-score': 0.8796841002498186, 'support': 12073.0} | 0.8397 | {'precision': 0.7968616150369864, 'recall': 0.7744869471591391, 'f1-score': 0.7842883727316763, 'support': 27619.0} | {'precision': 0.8338994705719012, 'recall': 0.8397479995655165, 'f1-score': 0.83585959833085, 'support': 27619.0} |
+ | No log | 5.0 | 205 | 0.4309 | {'precision': 0.5909090909090909, 'recall': 0.6487523992322457, 'f1-score': 0.6184812442817932, 'support': 4168.0} | {'precision': 0.7423312883435583, 'recall': 0.8434014869888475, 'f1-score': 0.7896454209266913, 'support': 2152.0} | {'precision': 0.9306642809214941, 'recall': 0.9020160416215044, 'f1-score': 0.916116248348745, 'support': 9226.0} | {'precision': 0.8933596431022649, 'recall': 0.8625031061045307, 'f1-score': 0.8776602469552024, 'support': 12073.0} | 0.8420 | {'precision': 0.7893160758191021, 'recall': 0.8141682584867821, 'f1-score': 0.8004757901281081, 'support': 27619.0} | {'precision': 0.8484103570143661, 'recall': 0.841956624063145, 'f1-score': 0.8445355530886866, 'support': 27619.0} |
+ | No log | 6.0 | 246 | 0.4454 | {'precision': 0.6128436128436129, 'recall': 0.6365163147792706, 'f1-score': 0.6244556902436155, 'support': 4168.0} | {'precision': 0.7766771724448447, 'recall': 0.8015799256505576, 'f1-score': 0.7889320832380518, 'support': 2152.0} | {'precision': 0.9283890307538581, 'recall': 0.9063516150010839, 'f1-score': 0.9172379751000932, 'support': 9226.0} | {'precision': 0.8828552478859227, 'recall': 0.8820508572848504, 'f1-score': 0.8824528692769836, 'support': 12073.0} | 0.8468 | {'precision': 0.8001912659820596, 'recall': 0.8066246781789406, 'f1-score': 0.803269654464686, 'support': 27619.0} | {'precision': 0.8490448625545937, 'recall': 0.8468445635251095, 'f1-score': 0.8478512693840529, 'support': 27619.0} |
+ | No log | 7.0 | 287 | 0.4743 | {'precision': 0.6026775041836003, 'recall': 0.6048464491362764, 'f1-score': 0.6037600287390732, 'support': 4168.0} | {'precision': 0.7918405192396848, 'recall': 0.7936802973977695, 'f1-score': 0.7927593409143652, 'support': 2152.0} | {'precision': 0.9339356295878035, 'recall': 0.8963797962280512, 'f1-score': 0.9147724130302528, 'support': 9226.0} | {'precision': 0.8678364455891823, 'recall': 0.8930671746873188, 'f1-score': 0.8802710535983997, 'support': 12073.0} | 0.8429 | {'precision': 0.7990725246500678, 'recall': 0.796993429362354, 'f1-score': 0.7978907090705227, 'support': 27619.0} | {'precision': 0.8439798747607198, 'recall': 0.8429342119555379, 'f1-score': 0.8432489450792122, 'support': 27619.0} |
+ | No log | 8.0 | 328 | 0.4908 | {'precision': 0.5787159190853123, 'recall': 0.6314779270633397, 'f1-score': 0.6039467645709041, 'support': 4168.0} | {'precision': 0.7667689609820254, 'recall': 0.8127323420074349, 'f1-score': 0.7890818858560795, 'support': 2152.0} | {'precision': 0.9358756100329134, 'recall': 0.8937784522003035, 'f1-score': 0.9143427399234906, 'support': 9226.0} | {'precision': 0.878620919943234, 'recall': 0.8717800049697673, 'f1-score': 0.8751870946283054, 'support': 12073.0} | 0.8383 | {'precision': 0.7899953525108712, 'recall': 0.8024421815602114, 'f1-score': 0.7956396212446948, 'support': 27619.0} | {'precision': 0.8437725297591957, 'recall': 0.8382635142474384, 'f1-score': 0.8406247237436355, 'support': 27619.0} |
+ | No log | 9.0 | 369 | 0.4966 | {'precision': 0.5958668197474167, 'recall': 0.6226007677543186, 'f1-score': 0.6089405139035551, 'support': 4168.0} | {'precision': 0.7666666666666667, 'recall': 0.8122676579925651, 'f1-score': 0.7888086642599278, 'support': 2152.0} | {'precision': 0.934957507082153, 'recall': 0.8943203988727509, 'f1-score': 0.9141875796354773, 'support': 9226.0} | {'precision': 0.8768813224771774, 'recall': 0.8831276401888511, 'f1-score': 0.8799933971607792, 'support': 12073.0} | 0.8420 | {'precision': 0.7935930789933534, 'recall': 0.8030791162021215, 'f1-score': 0.7979825387399347, 'support': 27619.0} | {'precision': 0.8452856996263733, 'recall': 0.8420290379811, 'f1-score': 0.843406176946174, 'support': 27619.0} |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.0
+ - Tokenizers 0.15.2
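The "Macro avg" and "Weighted avg" columns in the card above are, respectively, the unweighted and the support-weighted means of the per-class scores. A small self-contained check, using the epoch-9 F1 scores and supports from this card:

```python
# Per-class (f1-score, support) pairs from the epoch-9 evaluation above.
per_class = {
    "Claim":      (0.6089405139035551, 4168),
    "MajorClaim": (0.7888086642599278, 2152),
    "O":          (0.9141875796354773, 9226),
    "Premise":    (0.8799933971607792, 12073),
}

total_support = sum(support for _, support in per_class.values())  # 27619
weighted_f1 = sum(f1 * support for f1, support in per_class.values()) / total_support
macro_f1 = sum(f1 for f1, _ in per_class.values()) / len(per_class)

print(round(macro_f1, 6))     # matches the reported macro-avg f1 (~0.797983)
print(round(weighted_f1, 6))  # matches the reported weighted-avg f1 (~0.843406)
```

Because Premise and O together contribute over three quarters of the 27619 tokens, the weighted average sits well above the macro average, which is pulled down by the weaker Claim class.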
meta_data/meta_s42_e10_cvi0.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5968511450381679, "recall": 0.5870483341154388, "f1-score": 0.5919091554293825, "support": 4262.0}, "MajorClaim": {"precision": 0.7638358778625954, "recall": 0.7394919168591224, "f1-score": 0.7514667918329031, "support": 2165.0}, "O": {"precision": 0.916754478398314, "recall": 0.8816376165383056, "f1-score": 0.8988531873127389, "support": 9868.0}, "Premise": {"precision": 0.8754794924756565, "recall": 0.9101924994248025, "f1-score": 0.892498589960519, "support": 13039.0}, "accuracy": 0.8410377036885526, "macro avg": {"precision": 0.7882302484436835, "recall": 0.7795925917344173, "f1-score": 0.7836819311338858, "support": 29334.0}, "weighted avg": {"precision": 0.8406420723716451, "recall": 0.8410377036885526, "f1-score": 0.8405541280308031, "support": 29334.0}}
meta_data/meta_s42_e10_cvi1.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6026968456537443, "recall": 0.5748736793752871, "f1-score": 0.5884565651816152, "support": 4354.0}, "MajorClaim": {"precision": 0.7435328898743533, "recall": 0.8565346956151554, "f1-score": 0.7960435212660733, "support": 2349.0}, "O": {"precision": 0.9216879489225858, "recall": 0.9075638506876228, "f1-score": 0.9145713720055435, "support": 10180.0}, "Premise": {"precision": 0.8886645730521908, "recall": 0.8886645730521908, "f1-score": 0.8886645730521908, "support": 13374.0}, "accuracy": 0.8473741613510923, "macro avg": {"precision": 0.7891455643757186, "recall": 0.806909199682564, "f1-score": 0.7969340078763557, "support": 30257.0}, "weighted avg": {"precision": 0.8473571122161213, "recall": 0.8473741613510923, "f1-score": 0.846990206671884, "support": 30257.0}}
meta_data/meta_s42_e10_cvi2.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5921993056973658, "recall": 0.6363835856923414, "f1-score": 0.6134969325153374, "support": 4557.0}, "MajorClaim": {"precision": 0.8296101620674551, "recall": 0.8347289554869987, "f1-score": 0.8321616871704746, "support": 2269.0}, "O": {"precision": 0.9084591421040019, "recall": 0.8939983492512675, "f1-score": 0.9011707375052, "support": 8481.0}, "Premise": {"precision": 0.8887879846315054, "recall": 0.8753956240539424, "f1-score": 0.8820409719574336, "support": 14534.0}, "accuracy": 0.8410911162494554, "macro avg": {"precision": 0.804764148625082, "recall": 0.8101266286211376, "f1-score": 0.8072175822871114, "support": 29841.0}, "weighted avg": {"precision": 0.8445871199561774, "recall": 0.8410911162494554, "f1-score": 0.8426759458755786, "support": 29841.0}}
meta_data/meta_s42_e10_cvi3.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6673760374547776, "recall": 0.6348178137651822, "f1-score": 0.6506899055918663, "support": 4940.0}, "MajorClaim": {"precision": 0.8167539267015707, "recall": 0.8555758683729433, "f1-score": 0.8357142857142857, "support": 2188.0}, "O": {"precision": 0.9148169781931464, "recall": 0.8972596199751742, "f1-score": 0.9059532417449987, "support": 10473.0}, "Premise": {"precision": 0.8788569316992055, "recall": 0.8975407258318133, "f1-score": 0.888100572566592, "support": 15899.0}, "accuracy": 0.8559701492537314, "macro avg": {"precision": 0.8194509685121751, "recall": 0.8212985069862784, "f1-score": 0.8201145014044358, "support": 33500.0}, "weighted avg": {"precision": 0.8548573070552874, "recall": 0.8559701492537314, "f1-score": 0.8552510535760686, "support": 33500.0}}
meta_data/meta_s42_e10_cvi4.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5976262508727019, "recall": 0.6161228406909789, "f1-score": 0.6067336089781453, "support": 4168.0}, "MajorClaim": {"precision": 0.7766384306732055, "recall": 0.8094795539033457, "f1-score": 0.7927189988623435, "support": 2152.0}, "O": {"precision": 0.9332209106239461, "recall": 0.8997398655972252, "f1-score": 0.9161746040505491, "support": 9226.0}, "Premise": {"precision": 0.8752462245567958, "recall": 0.8832932990971589, "f1-score": 0.8792513501257369, "support": 12073.0}, "accuracy": 0.8427169702016728, "macro avg": {"precision": 0.7956829541816623, "recall": 0.8021588898221772, "f1-score": 0.7987196405041936, "support": 27619.0}, "weighted avg": {"precision": 0.8450333432396857, "recall": 0.8427169702016728, "f1-score": 0.8437172024624736, "support": 27619.0}}
meta_data/meta_s42_e11_cvi0.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5647847717932079, "recall": 0.6126231816048804, "f1-score": 0.5877321328081037, "support": 4262.0}, "MajorClaim": {"precision": 0.7743341404358354, "recall": 0.7385681293302541, "f1-score": 0.7560283687943262, "support": 2165.0}, "O": {"precision": 0.9127931323283082, "recall": 0.8835630320226996, "f1-score": 0.8979402677651904, "support": 9868.0}, "Premise": {"precision": 0.8823888804032382, "recall": 0.8861108980750058, "f1-score": 0.8842459725251598, "support": 13039.0}, "accuracy": 0.8346287584373082, "macro avg": {"precision": 0.7835752312401474, "recall": 0.78021631025821, "f1-score": 0.781486685473195, "support": 29334.0}, "weighted avg": {"precision": 0.8384965348339742, "recall": 0.8346287584373082, "f1-score": 0.8363085009385121, "support": 29334.0}}
meta_data/meta_s42_e11_cvi1.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6070287539936102, "recall": 0.6109324758842444, "f1-score": 0.6089743589743589, "support": 4354.0}, "MajorClaim": {"precision": 0.7809119010819165, "recall": 0.8603661132396765, "f1-score": 0.8187158193234758, "support": 2349.0}, "O": {"precision": 0.9231456657730116, "recall": 0.9132612966601179, "f1-score": 0.9181768801540665, "support": 10180.0}, "Premise": {"precision": 0.8930084745762712, "recall": 0.8824585015702109, "f1-score": 0.8877021436630312, "support": 13374.0}, "accuracy": 0.852034240010576, "macro avg": {"precision": 0.8010236988562024, "recall": 0.8167545968385623, "f1-score": 0.8083923005287331, "support": 30257.0}, "weighted avg": {"precision": 0.8532929063384311, "recall": 0.852034240010576, "f1-score": 0.8524905617834876, "support": 30257.0}}
meta_data/meta_s42_e11_cvi2.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6089608750525873, "recall": 0.6352863726135616, "f1-score": 0.6218451294168188, "support": 4557.0}, "MajorClaim": {"precision": 0.8171052631578948, "recall": 0.8210665491405905, "f1-score": 0.8190811167289516, "support": 2269.0}, "O": {"precision": 0.9043169722057954, "recall": 0.9015446291710884, "f1-score": 0.9029286726499764, "support": 8481.0}, "Premise": {"precision": 0.889283723522854, "recall": 0.8781477913857163, "f1-score": 0.8836806757598838, "support": 14534.0}, "accuracy": 0.8433698602593747, "macro avg": {"precision": 0.8049167084847829, "recall": 0.8090113355777392, "f1-score": 0.8068838986389077, "support": 29841.0}, "weighted avg": {"precision": 0.8452601598029025, "recall": 0.8433698602593747, "f1-score": 0.8442544258854943, "support": 29841.0}}
meta_data/meta_s42_e11_cvi3.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6589018302828619, "recall": 0.6412955465587045, "f1-score": 0.6499794829708658, "support": 4940.0}, "MajorClaim": {"precision": 0.8112398112398113, "recall": 0.8642595978062158, "f1-score": 0.8369108209780924, "support": 2188.0}, "O": {"precision": 0.9196393571148569, "recall": 0.8960183328559153, "f1-score": 0.9076751946607342, "support": 10473.0}, "Premise": {"precision": 0.8803614532400817, "recall": 0.8946474621045348, "f1-score": 0.8874469678063388, "support": 15899.0}, "accuracy": 0.8557313432835821, "macro avg": {"precision": 0.8175356129694029, "recall": 0.8240552348313426, "f1-score": 0.8205031166040078, "support": 33500.0}, "weighted avg": {"precision": 0.8554691785288956, "recall": 0.8557313432835821, "f1-score": 0.8554525724480894, "support": 33500.0}}
meta_data/meta_s42_e11_cvi4.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6066587395957194, "recall": 0.6120441458733206, "f1-score": 0.609339543771647, "support": 4168.0}, "MajorClaim": {"precision": 0.7760070827799912, "recall": 0.8145910780669146, "f1-score": 0.7948311040580367, "support": 2152.0}, "O": {"precision": 0.9332207207207207, "recall": 0.8982224149143724, "f1-score": 0.9153871644758642, "support": 9226.0}, "Premise": {"precision": 0.8730753564154786, "recall": 0.8876832601673155, "f1-score": 0.8803187120091999, "support": 12073.0}, "accuracy": 0.8439117998479307, "macro avg": {"precision": 0.7972404748779774, "recall": 0.8031352247554808, "f1-score": 0.799969131078687, "support": 27619.0}, "weighted avg": {"precision": 0.8453982409265701, "recall": 0.8439117998479307, "f1-score": 0.8444785670702963, "support": 27619.0}}
meta_data/meta_s42_e12_cvi0.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5947126436781609, "recall": 0.6069920225246364, "f1-score": 0.60078959591268, "support": 4262.0}, "MajorClaim": {"precision": 0.7675083373034779, "recall": 0.7441108545034643, "f1-score": 0.7556285178236397, "support": 2165.0}, "O": {"precision": 0.9155476290390264, "recall": 0.8843737332792866, "f1-score": 0.8996907216494845, "support": 9868.0}, "Premise": {"precision": 0.8824983149853965, "recall": 0.9037502875987422, "f1-score": 0.8929978781448925, "support": 13039.0}, "accuracy": 0.8423331287925274, "macro avg": {"precision": 0.7900667312515154, "recall": 0.7848067244765323, "f1-score": 0.7872766783826742, "support": 29334.0}, "weighted avg": {"precision": 0.8433163008819133, "recall": 0.8423331287925274, "f1-score": 0.8426552251052154, "support": 29334.0}}
meta_data/meta_s42_e12_cvi1.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5889487870619946, "recall": 0.602204869085898, "f1-score": 0.5955030660913013, "support": 4354.0}, "MajorClaim": {"precision": 0.7601390498261877, "recall": 0.8378033205619413, "f1-score": 0.7970838396111787, "support": 2349.0}, "O": {"precision": 0.9227710963124938, "recall": 0.9119842829076621, "f1-score": 0.917345980929796, "support": 10180.0}, "Premise": {"precision": 0.893652603572786, "recall": 0.8790189920741738, "f1-score": 0.8862753967356479, "support": 13374.0}, "accuracy": 0.8470767095217635, "macro avg": {"precision": 0.7913778841933654, "recall": 0.8077528661574188, "f1-score": 0.7990520708419809, "support": 30257.0}, "weighted avg": {"precision": 0.8492371790842868, "recall": 0.8470767095217635, "f1-score": 0.8479624394624736, "support": 30257.0}}
meta_data/meta_s42_e12_cvi2.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6091735362502643, "recall": 0.6324336186087338, "f1-score": 0.6205857019810508, "support": 4557.0}, "MajorClaim": {"precision": 0.8285457809694794, "recall": 0.8135742617893346, "f1-score": 0.8209917722926396, "support": 2269.0}, "O": {"precision": 0.9031606463878327, "recall": 0.8962386511024644, "f1-score": 0.8996863348523407, "support": 8481.0}, "Premise": {"precision": 0.8883589105488732, "recall": 0.8842025595156185, "f1-score": 0.8862758620689655, "support": 14534.0}, "accuracy": 0.8438055024965652, "macro avg": {"precision": 0.8073097185391125, "recall": 0.8066122727540378, "f1-score": 0.8068849177987492, "support": 29841.0}, "weighted avg": {"precision": 0.8453834666949746, "recall": 0.8438055024965652, "f1-score": 0.8445498663065144, "support": 29841.0}}
meta_data/meta_s42_e12_cvi3.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6668821374703727, "recall": 0.6265182186234818, "f1-score": 0.6460703475628848, "support": 4940.0}, "MajorClaim": {"precision": 0.8047874535699546, "recall": 0.8912248628884827, "f1-score": 0.8458035133376708, "support": 2188.0}, "O": {"precision": 0.9271307099070595, "recall": 0.8953499474840065, "f1-score": 0.9109632292223246, "support": 10473.0}, "Premise": {"precision": 0.8759343217742923, "recall": 0.8992389458456507, "f1-score": 0.887433661276807, "support": 15899.0}, "accuracy": 0.8572835820895522, "macro avg": {"precision": 0.8186836556804198, "recall": 0.8280829937104054, "f1-score": 0.822567687849922, "support": 33500.0}, "weighted avg": {"precision": 0.8564654452018152, "recall": 0.8572835820895522, "f1-score": 0.8564785458038459, "support": 33500.0}}
meta_data/meta_s42_e12_cvi4.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6009334889148191, "recall": 0.6178023032629558, "f1-score": 0.6092511534366496, "support": 4168.0}, "MajorClaim": {"precision": 0.7783711615487316, "recall": 0.8127323420074349, "f1-score": 0.7951807228915663, "support": 2152.0}, "O": {"precision": 0.9334009465855307, "recall": 0.8977888575764145, "f1-score": 0.9152486187845303, "support": 9226.0}, "Premise": {"precision": 0.8738229755178908, "recall": 0.8839559347303901, "f1-score": 0.8788602487029565, "support": 12073.0}, "accuracy": 0.8428617980375829, "macro avg": {"precision": 0.7966321431417431, "recall": 0.8030698593942989, "f1-score": 0.7996351859539257, "support": 27619.0}, "weighted avg": {"precision": 0.8451054505259219, "recall": 0.8428617980375829, "f1-score": 0.8438086557327736, "support": 27619.0}}
meta_data/meta_s42_e13_cvi0.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5870740305522915, "recall": 0.5861098076020648, "f1-score": 0.5865915228366796, "support": 4262.0}, "MajorClaim": {"precision": 0.7950778503264692, "recall": 0.7311778290993072, "f1-score": 0.7617901828681425, "support": 2165.0}, "O": {"precision": 0.9098112814096548, "recall": 0.8842723956222133, "f1-score": 0.8968600647515289, "support": 9868.0}, "Premise": {"precision": 0.8748610802400534, "recall": 0.9055909195490451, "f1-score": 0.8899608079589992, "support": 13039.0}, "accuracy": 0.8391286561669054, "macro avg": {"precision": 0.7917060606321172, "recall": 0.7767877379681576, "f1-score": 0.7838006446038375, "support": 29334.0}, "weighted avg": {"precision": 0.8389167660179723, "recall": 0.8391286561669054, "f1-score": 0.8387449004631122, "support": 29334.0}}
meta_data/meta_s42_e13_cvi1.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6076249112005683, "recall": 0.5893431327514929, "f1-score": 0.5983444094671797, "support": 4354.0}, "MajorClaim": {"precision": 0.7571984435797665, "recall": 0.8284376330353341, "f1-score": 0.7912177271803211, "support": 2349.0}, "O": {"precision": 0.9232534930139721, "recall": 0.9087426326129666, "f1-score": 0.9159405940594059, "support": 10180.0}, "Premise": {"precision": 0.8907319250223148, "recall": 0.895394048153133, "f1-score": 0.8930569020806921, "support": 13374.0}, "accuracy": 0.8506461314737086, "macro avg": {"precision": 0.7947021932041555, "recall": 0.8054793616382316, "f1-score": 0.7996399081968997, "support": 30257.0}, "weighted avg": {"precision": 0.8505677142964214, "recall": 0.8506461314737086, "f1-score": 0.8504405676676009, "support": 30257.0}}
meta_data/meta_s42_e13_cvi2.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6135578583765112, "recall": 0.6236559139784946, "f1-score": 0.6185656763521602, "support": 4557.0}, "MajorClaim": {"precision": 0.8212860310421286, "recall": 0.8162185985015425, "f1-score": 0.8187444739168876, "support": 2269.0}, "O": {"precision": 0.9046037019458947, "recall": 0.89895059544865, "f1-score": 0.901768289076823, "support": 8481.0}, "Premise": {"precision": 0.8898526779567671, "recall": 0.8893628732626944, "f1-score": 0.8896077081899518, "support": 14534.0}, "accuracy": 0.845950202741195, "macro avg": {"precision": 0.8073250673303254, "recall": 0.8070469952978454, "f1-score": 0.8071715368839557, "support": 29841.0}, "weighted avg": {"precision": 0.8466386509394169, "recall": 0.845950202741195, "f1-score": 0.8462849867279082, "support": 29841.0}}
meta_data/meta_s42_e13_cvi3.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6582512315270936, "recall": 0.6491902834008098, "f1-score": 0.653689359967387, "support": 4940.0}, "MajorClaim": {"precision": 0.8046261875258158, "recall": 0.8903107861060329, "f1-score": 0.8453026686916901, "support": 2188.0}, "O": {"precision": 0.92805470220989, "recall": 0.8942041439893058, "f1-score": 0.9108150165337483, "support": 10473.0}, "Premise": {"precision": 0.8815462894018367, "recall": 0.8935782124661928, "f1-score": 0.8875214743089176, "support": 15899.0}, "accuracy": 0.8575223880597015, "macro avg": {"precision": 0.818119602666159, "recall": 0.8318208564905853, "f1-score": 0.8243321298754357, "support": 33500.0}, "weighted avg": {"precision": 0.8581344636863972, "recall": 0.8575223880597015, "f1-score": 0.8575646944934847, "support": 33500.0}}
meta_data/meta_s42_e13_cvi4.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6002427184466019, "recall": 0.5933301343570058, "f1-score": 0.5967664092664092, "support": 4168.0}, "MajorClaim": {"precision": 0.7753721244925575, "recall": 0.7987918215613383, "f1-score": 0.7869077592126345, "support": 2152.0}, "O": {"precision": 0.9312121891104638, "recall": 0.9009321482766096, "f1-score": 0.9158219479947113, "support": 9226.0}, "Premise": {"precision": 0.8693752023308514, "recall": 0.8897539965211629, "f1-score": 0.8794465594170862, "support": 12073.0}, "accuracy": 0.8416669683913248, "macro avg": {"precision": 0.7940505585951186, "recall": 0.7957020251790292, "f1-score": 0.7947356689727103, "support": 27619.0}, "weighted avg": {"precision": 0.8420921444247412, "recall": 0.8416669683913248, "f1-score": 0.8417277778228636, "support": 27619.0}}
meta_data/meta_s42_e14_cvi0.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5852920642308588, "recall": 0.5900985452839043, "f1-score": 0.5876854772753826, "support": 4262.0}, "MajorClaim": {"precision": 0.7803784860557769, "recall": 0.7237875288683603, "f1-score": 0.7510184519530313, "support": 2165.0}, "O": {"precision": 0.9105485232067511, "recall": 0.8747466558573166, "f1-score": 0.8922886086417201, "support": 9868.0}, "Premise": {"precision": 0.8734961989814747, "recall": 0.907661630493136, "f1-score": 0.8902512411614263, "support": 13039.0}, "accuracy": 0.8368787073021068, "macro avg": {"precision": 0.7874288181187153, "recall": 0.7740735901256792, "f1-score": 0.78031094475789, "support": 29334.0}, "weighted avg": {"precision": 0.8372142894111387, "recall": 0.8368787073021068, "f1-score": 0.8367000878232196, "support": 29334.0}}
meta_data/meta_s42_e14_cvi1.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6126810733792448, "recall": 0.5925585668350941, "f1-score": 0.6024518388791592, "support": 4354.0}, "MajorClaim": {"precision": 0.7490196078431373, "recall": 0.8131119625372499, "f1-score": 0.7797509695856297, "support": 2349.0}, "O": {"precision": 0.9198922585794094, "recall": 0.9057956777996071, "f1-score": 0.9127895466244308, "support": 10180.0}, "Premise": {"precision": 0.8874703087885986, "recall": 0.8939733811873785, "f1-score": 0.8907099754153318, "support": 13374.0}, "accuracy": 0.8482995670423373, "macro avg": {"precision": 0.7922658121475975, "recall": 0.8013598970898324, "f1-score": 0.7964255826261378, "support": 30257.0}, "weighted avg": {"precision": 0.8480877666124819, "recall": 0.8482995670423373, "f1-score": 0.8480438619122139, "support": 30257.0}}
meta_data/meta_s42_e14_cvi2.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.595850622406639, "recall": 0.630239192451174, "f1-score": 0.6125626533006292, "support": 4557.0}, "MajorClaim": {"precision": 0.8051391862955032, "recall": 0.8285588364918466, "f1-score": 0.8166811468288445, "support": 2269.0}, "O": {"precision": 0.908732057416268, "recall": 0.8957670086074755, "f1-score": 0.9022029570690576, "support": 8481.0}, "Premise": {"precision": 0.890827865419517, "recall": 0.878078987202422, "f1-score": 0.8844074844074844, "support": 14534.0}, "accuracy": 0.8414932475453235, "macro avg": {"precision": 0.8001374328844817, "recall": 0.8081610061882295, "f1-score": 0.803963560401504, "support": 29841.0}, "weighted avg": {"precision": 0.8443551112551918, "recall": 0.8414932475453235, "f1-score": 0.8428021577871609, "support": 29841.0}}
meta_data/meta_s42_e14_cvi3.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6615808433990252, "recall": 0.6319838056680162, "f1-score": 0.6464437312351174, "support": 4940.0}, "MajorClaim": {"precision": 0.8065739570164349, "recall": 0.8747714808043876, "f1-score": 0.8392896294672221, "support": 2188.0}, "O": {"precision": 0.925365757216291, "recall": 0.8938222094910723, "f1-score": 0.9093205109524503, "support": 10473.0}, "Premise": {"precision": 0.8768107046403143, "recall": 0.8984841813950563, "f1-score": 0.8875151439843435, "support": 15899.0}, "accuracy": 0.8561791044776119, "macro avg": {"precision": 0.8175828155680164, "recall": 0.824765419339633, "f1-score": 0.8206422539097834, "support": 33500.0}, "weighted avg": {"precision": 0.8556645418730064, "recall": 0.8561791044776119, "f1-score": 0.855633275432473, "support": 33500.0}}
meta_data/meta_s42_e14_cvi4.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6048215551878988, "recall": 0.6139635316698656, "f1-score": 0.6093582569353494, "support": 4168.0}, "MajorClaim": {"precision": 0.7849462365591398, "recall": 0.8141263940520446, "f1-score": 0.7992700729927007, "support": 2152.0}, "O": {"precision": 0.931049822064057, "recall": 0.9074355083459787, "f1-score": 0.9190910088923043, "support": 9226.0}, "Premise": {"precision": 0.8758632028937849, "recall": 0.8824650045556199, "f1-score": 0.8791517101951561, "support": 12073.0}, "accuracy": 0.8449618016582787, "macro avg": {"precision": 0.7991702041762201, "recall": 0.8044976096558771, "f1-score": 0.8017177622538776, "support": 27619.0}, "weighted avg": {"precision": 0.8463109688981529, "recall": 0.8449618016582787, "f1-score": 0.8455543885446015, "support": 27619.0}}
meta_data/meta_s42_e15_cvi0.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5817039106145251, "recall": 0.5863444392304082, "f1-score": 0.5840149567655994, "support": 4262.0}, "MajorClaim": {"precision": 0.7930186823992134, "recall": 0.7450346420323326, "f1-score": 0.7682781614670159, "support": 2165.0}, "O": {"precision": 0.9127108875615635, "recall": 0.8826509931090393, "f1-score": 0.897429292669105, "support": 9868.0}, "Premise": {"precision": 0.8731892132828171, "recall": 0.9014494976608636, "f1-score": 0.8870943396226415, "support": 13039.0}, "accuracy": 0.8377991409286153, "macro avg": {"precision": 0.7901556734645298, "recall": 0.7788698930081609, "f1-score": 0.7842041876310903, "support": 29334.0}, "weighted avg": {"precision": 0.8382168372838877, "recall": 0.8377991409286153, "f1-score": 0.8377667321098188, "support": 29334.0}}
meta_data/meta_s42_e15_cvi1.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6026920399164539, "recall": 0.5964630225080386, "f1-score": 0.5995613528800647, "support": 4354.0}, "MajorClaim": {"precision": 0.7556840077071291, "recall": 0.8348233290762026, "f1-score": 0.7932847896440129, "support": 2349.0}, "O": {"precision": 0.919852648347272, "recall": 0.9075638506876228, "f1-score": 0.9136669303797469, "support": 10180.0}, "Premise": {"precision": 0.893004733638891, "recall": 0.8886645730521908, "f1-score": 0.8908293670127048, "support": 13374.0}, "accuracy": 0.8487953200912186, "macro avg": {"precision": 0.7928083574024365, "recall": 0.8068786938310137, "f1-score": 0.7993356099791323, "support": 30257.0}, "weighted avg": {"precision": 0.8496006921955925, "recall": 0.8487953200912186, "f1-score": 0.8490265858150949, "support": 30257.0}}
meta_data/meta_s42_e15_cvi2.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5988150655945832, "recall": 0.6210226025894229, "f1-score": 0.6097166864160294, "support": 4557.0}, "MajorClaim": {"precision": 0.817984361424848, "recall": 0.8298810048479507, "f1-score": 0.8238897396630934, "support": 2269.0}, "O": {"precision": 0.9015312131919906, "recall": 0.9024879141610659, "f1-score": 0.9020093099994109, "support": 8481.0}, "Premise": {"precision": 0.8912937233819731, "recall": 0.8783542039355993, "f1-score": 0.8847766573101847, "support": 14534.0}, "accuracy": 0.842230488254415, "macro avg": {"precision": 0.8024060908983487, "recall": 0.8079364313835097, "f1-score": 0.8050980983471796, "support": 29841.0}, "weighted avg": {"precision": 0.8439648793506372, "recall": 0.842230488254415, "f1-score": 0.8430404361363437, "support": 29841.0}}
meta_data/meta_s42_e15_cvi3.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6558577405857741, "recall": 0.6346153846153846, "f1-score": 0.6450617283950616, "support": 4940.0}, "MajorClaim": {"precision": 0.8072805139186295, "recall": 0.8615173674588665, "f1-score": 0.833517576829538, "support": 2188.0}, "O": {"precision": 0.9255487443148112, "recall": 0.8938222094910723, "f1-score": 0.9094088502453005, "support": 10473.0}, "Premise": {"precision": 0.8767746297092988, "recall": 0.8972891376816152, "f1-score": 0.8869132732359342, "support": 15899.0}, "accuracy": 0.8551343283582089, "macro avg": {"precision": 0.8163654071321284, "recall": 0.8218110248117346, "f1-score": 0.8187253571764586, "support": 33500.0}, "weighted avg": {"precision": 0.8549068310419357, "recall": 0.8551343283582089, "f1-score": 0.8547944601842323, "support": 33500.0}}
meta_data/meta_s42_e15_cvi4.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6039084842707341, "recall": 0.6079654510556622, "f1-score": 0.6059301769488283, "support": 4168.0}, "MajorClaim": {"precision": 0.7791218637992832, "recall": 0.8080855018587361, "f1-score": 0.7933394160583942, "support": 2152.0}, "O": {"precision": 0.936604624929498, "recall": 0.8999566442662043, "f1-score": 0.9179149853518324, "support": 9226.0}, "Premise": {"precision": 0.8701119584617881, "recall": 0.8883458958005467, "f1-score": 0.8791343907537195, "support": 12073.0}, "accuracy": 0.8436583511350881, "macro avg": {"precision": 0.7974367328653259, "recall": 0.8010883732452874, "f1-score": 0.7990797422781937, "support": 27619.0}, "weighted avg": {"precision": 0.8450608913228282, "recall": 0.8436583511350881, "f1-score": 0.8441745376482147, "support": 27619.0}}
meta_data/meta_s42_e16_cvi0.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5606635071090047, "recall": 0.5551384326607227, "f1-score": 0.5578872907333177, "support": 4262.0}, "MajorClaim": {"precision": 0.748995983935743, "recall": 0.6891454965357968, "f1-score": 0.7178253548231899, "support": 2165.0}, "O": {"precision": 0.9082793070464449, "recall": 0.8660316173490069, "f1-score": 0.8866524874202418, "support": 9868.0}, "Premise": {"precision": 0.8605702617953767, "recall": 0.9050540685635402, "f1-score": 0.8822517942583731, "support": 13039.0}, "accuracy": 0.8251517010977023, "macro avg": {"precision": 0.7696272649716422, "recall": 0.7538424037772666, "f1-score": 0.7611542318087806, "support": 29334.0}, "weighted avg": {"precision": 0.8248108003682995, "recall": 0.8251517010977023, "f1-score": 0.8244690603905188, "support": 29334.0}}
meta_data/meta_s42_e4_cvi0.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5270985631459542, "recall": 0.49061473486625995, "f1-score": 0.5082026977761575, "support": 4262.0}, "MajorClaim": {"precision": 0.6919964028776978, "recall": 0.7108545034642032, "f1-score": 0.7012987012987012, "support": 2165.0}, "O": {"precision": 0.9103683202043857, "recall": 0.866639643291447, "f1-score": 0.8879659433080678, "support": 9868.0}, "Premise": {"precision": 0.8532984217033966, "recall": 0.8997622517064192, "f1-score": 0.8759145886217709, "support": 13039.0}, "accuracy": 0.8152314720119997, "macro avg": {"precision": 0.7456904269828586, "recall": 0.7419677833320824, "f1-score": 0.7433454827511743, "support": 29334.0}, "weighted avg": {"precision": 0.8131976202606442, "recall": 0.8152314720119997, "f1-score": 0.813655479506271, "support": 29334.0}}
meta_data/meta_s42_e4_cvi1.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5628544423440454, "recall": 0.5470831419384474, "f1-score": 0.5548567435359889, "support": 4354.0}, "MajorClaim": {"precision": 0.672202166064982, "recall": 0.7926777352064709, "f1-score": 0.7274858370775543, "support": 2349.0}, "O": {"precision": 0.9228269699431356, "recall": 0.8927308447937131, "f1-score": 0.907529458757739, "support": 10180.0}, "Premise": {"precision": 0.8858059222794062, "recall": 0.8879916255420967, "f1-score": 0.8868974272805348, "support": 13374.0}, "accuracy": 0.833129523746571, "macro avg": {"precision": 0.7609223751578923, "recall": 0.7801208368701821, "f1-score": 0.7691923666629543, "support": 30257.0}, "weighted avg": {"precision": 0.8352056743444002, "recall": 0.833129523746571, "f1-score": 0.8336823404585559, "support": 30257.0}}
meta_data/meta_s42_e4_cvi2.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.6082667919210897, "recall": 0.5683563748079877, "f1-score": 0.5876347135564379, "support": 4557.0}, "MajorClaim": {"precision": 0.7201612903225807, "recall": 0.7871308946672543, "f1-score": 0.7521583491261318, "support": 2269.0}, "O": {"precision": 0.8928226196230062, "recall": 0.871241598868058, "f1-score": 0.8819001014501402, "support": 8481.0}, "Premise": {"precision": 0.8750252916975787, "recall": 0.8926654740608229, "f1-score": 0.8837573652123566, "support": 14534.0}, "accuracy": 0.8290271773734125, "macro avg": {"precision": 0.7740689983910638, "recall": 0.7798485856010307, "f1-score": 0.7763626323362666, "support": 29841.0}, "weighted avg": {"precision": 0.827571594955989, "recall": 0.8290271773734125, "f1-score": 0.8280025129934959, "support": 29841.0}}
meta_data/meta_s42_e4_cvi3.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5995675675675676, "recall": 0.5613360323886639, "f1-score": 0.5798222686879246, "support": 4940.0}, "MajorClaim": {"precision": 0.7202194357366771, "recall": 0.840036563071298, "f1-score": 0.7755274261603375, "support": 2188.0}, "O": {"precision": 0.913538674853917, "recall": 0.8807409529265731, "f1-score": 0.8968400583373844, "support": 10473.0}, "Premise": {"precision": 0.8654628374214224, "recall": 0.8832630983080697, "f1-score": 0.8742723735408561, "support": 15899.0}, "accuracy": 0.8321791044776119, "macro avg": {"precision": 0.7746971288948961, "recall": 0.7913441616736512, "f1-score": 0.7816155316816257, "support": 33500.0}, "weighted avg": {"precision": 0.8317966597935492, "recall": 0.8321791044776119, "f1-score": 0.8314578630940497, "support": 33500.0}}
meta_data/meta_s42_e4_cvi4.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5835557928457021, "recall": 0.52447216890595, "f1-score": 0.5524387161991408, "support": 4168.0}, "MajorClaim": {"precision": 0.6944444444444444, "recall": 0.824814126394052, "f1-score": 0.754035683942226, "support": 2152.0}, "O": {"precision": 0.934596507248031, "recall": 0.8874918707999133, "f1-score": 0.9104353143937287, "support": 9226.0}, "Premise": {"precision": 0.8580758203249442, "recall": 0.8924045390540877, "f1-score": 0.8749035689634171, "support": 12073.0}, "accuracy": 0.8299721206415873, "macro avg": {"precision": 0.7676681412157805, "recall": 0.7822956762885007, "f1-score": 0.7729533208746281, "support": 27619.0}, "weighted avg": {"precision": 0.8294594932357695, "recall": 0.8299721206415873, "f1-score": 0.8286917107662684, "support": 27619.0}}
meta_data/meta_s42_e5_cvi0.json ADDED
@@ -0,0 +1 @@
+ {"Claim": {"precision": 0.5373616236162362, "recall": 0.5466916940403567, "f1-score": 0.5419865084903465, "support": 4262.0}, "MajorClaim": {"precision": 0.739356669820246, "recall": 0.7219399538106236, "f1-score": 0.7305445197476046, "support": 2165.0}, "O": {"precision": 0.9105509466071049, "recall": 0.8675516822051074, "f1-score": 0.8885313959522575, "support": 9868.0}, "Premise": {"precision": 0.8653018839934727, "recall": 0.8947005138430861, "f1-score": 0.8797556653218205, "support": 13039.0}, "accuracy": 0.8222540396809164, "macro avg": {"precision": 0.7631427810092649, "recall": 0.7577209609747935, "f1-score": 0.7602045223780073, "support": 29334.0}, "weighted avg": {"precision": 0.8235811834909332, "recall": 0.8222540396809164, "f1-score": 0.8226200763560209, "support": 29334.0}}