Theoreticallyhugo committed on
Commit 9b492fa · 1 Parent(s): 11f8fd6

trainer: training complete at 2023-11-14 14:08:14.362464.

Files changed (1):
  1. README.md +12 -13
README.md CHANGED
@@ -17,13 +17,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7163
-- Claim: {'precision': 0.41496598639455784, 'recall': 0.4236111111111111, 'f1-score': 0.41924398625429554, 'support': 144.0}
-- Majorclaim: {'precision': 0.6216216216216216, 'recall': 0.6388888888888888, 'f1-score': 0.6301369863013699, 'support': 72.0}
-- Premise: {'precision': 0.8118556701030928, 'recall': 0.8015267175572519, 'f1-score': 0.8066581306017925, 'support': 393.0}
-- Accuracy: 0.6929
-- Macro avg: {'precision': 0.6161477593730907, 'recall': 0.6213422391857506, 'f1-score': 0.618679701052486, 'support': 609.0}
-- Weighted avg: {'precision': 0.695519108617551, 'recall': 0.6929392446633826, 'f1-score': 0.6941833207895264, 'support': 609.0}
+- Loss: 0.6725
+- Claim: {'precision': 0.4435483870967742, 'recall': 0.3819444444444444, 'f1-score': 0.4104477611940298, 'support': 144.0}
+- Majorclaim: {'precision': 0.6166666666666667, 'recall': 0.5138888888888888, 'f1-score': 0.5606060606060607, 'support': 72.0}
+- Premise: {'precision': 0.7976470588235294, 'recall': 0.8625954198473282, 'f1-score': 0.8288508557457213, 'support': 393.0}
+- Accuracy: 0.7077
+- Macro avg: {'precision': 0.6192873708623234, 'recall': 0.5861429177268872, 'f1-score': 0.5999682258486039, 'support': 609.0}
+- Weighted avg: {'precision': 0.6925225974705789, 'recall': 0.7077175697865353, 'f1-score': 0.6982044339632925, 'support': 609.0}
 
 ## Model description
 
@@ -48,15 +48,14 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3
+- num_epochs: 2
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | Premise | Accuracy | Macro avg | Weighted avg |
-|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
-| No log | 1.0 | 267 | 0.7071 | {'precision': 0.4430379746835443, 'recall': 0.24305555555555555, 'f1-score': 0.31390134529147984, 'support': 144.0} | {'precision': 0.5692307692307692, 'recall': 0.5138888888888888, 'f1-score': 0.5401459854014599, 'support': 72.0} | {'precision': 0.7634408602150538, 'recall': 0.9033078880407125, 'f1-score': 0.8275058275058275, 'support': 393.0} | 0.7011 | {'precision': 0.5919032013764557, 'recall': 0.553417444161719, 'f1-score': 0.560517719399589, 'support': 609.0} | {'precision': 0.6647197730764564, 'recall': 0.7011494252873564, 'f1-score': 0.6720888257482239, 'support': 609.0} |
-| 0.7262 | 2.0 | 534 | 0.6601 | {'precision': 0.43312101910828027, 'recall': 0.4722222222222222, 'f1-score': 0.45182724252491696, 'support': 144.0} | {'precision': 0.6615384615384615, 'recall': 0.5972222222222222, 'f1-score': 0.6277372262773723, 'support': 72.0} | {'precision': 0.813953488372093, 'recall': 0.8015267175572519, 'f1-score': 0.8076923076923077, 'support': 393.0} | 0.6995 | {'precision': 0.6362043230062783, 'recall': 0.6236570540005655, 'f1-score': 0.6290855921648656, 'support': 609.0} | {'precision': 0.7058849210387425, 'recall': 0.6995073891625616, 'f1-score': 0.702271395958351, 'support': 609.0} |
-| 0.7262 | 3.0 | 801 | 0.7163 | {'precision': 0.41496598639455784, 'recall': 0.4236111111111111, 'f1-score': 0.41924398625429554, 'support': 144.0} | {'precision': 0.6216216216216216, 'recall': 0.6388888888888888, 'f1-score': 0.6301369863013699, 'support': 72.0} | {'precision': 0.8118556701030928, 'recall': 0.8015267175572519, 'f1-score': 0.8066581306017925, 'support': 393.0} | 0.6929 | {'precision': 0.6161477593730907, 'recall': 0.6213422391857506, 'f1-score': 0.618679701052486, 'support': 609.0} | {'precision': 0.695519108617551, 'recall': 0.6929392446633826, 'f1-score': 0.6941833207895264, 'support': 609.0} |
+| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | Premise | Accuracy | Macro avg | Weighted avg |
+|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+| No log | 1.0 | 267 | 0.6970 | {'precision': 0.4307692307692308, 'recall': 0.19444444444444445, 'f1-score': 0.2679425837320574, 'support': 144.0} | {'precision': 0.5774647887323944, 'recall': 0.5694444444444444, 'f1-score': 0.5734265734265734, 'support': 72.0} | {'precision': 0.758985200845666, 'recall': 0.9134860050890585, 'f1-score': 0.8290993071593533, 'support': 393.0} | 0.7028 | {'precision': 0.589073073449097, 'recall': 0.5591249646593158, 'f1-score': 0.556822821439328, 'support': 609.0} | {'precision': 0.6599169424496689, 'recall': 0.7027914614121511, 'f1-score': 0.6661846848239006, 'support': 609.0} |
+| 0.7281 | 2.0 | 534 | 0.6725 | {'precision': 0.4435483870967742, 'recall': 0.3819444444444444, 'f1-score': 0.4104477611940298, 'support': 144.0} | {'precision': 0.6166666666666667, 'recall': 0.5138888888888888, 'f1-score': 0.5606060606060607, 'support': 72.0} | {'precision': 0.7976470588235294, 'recall': 0.8625954198473282, 'f1-score': 0.8288508557457213, 'support': 393.0} | 0.7077 | {'precision': 0.6192873708623234, 'recall': 0.5861429177268872, 'f1-score': 0.5999682258486039, 'support': 609.0} | {'precision': 0.6925225974705789, 'recall': 0.7077175697865353, 'f1-score': 0.6982044339632925, 'support': 609.0} |
 
 
 ### Framework versions
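
The per-class dicts in the diff above follow scikit-learn's `classification_report` layout, so the "Macro avg" and "Weighted avg" rows can be reproduced directly from the per-class scores: the macro average is the unweighted mean over classes, and the weighted average weights each class by its support. A minimal sketch checking the updated (`+`) evaluation numbers — the values are copied from the card, while the helper function names are illustrative:

```python
# Per-class precision/recall/support as reported in the updated model card.
scores = {
    "Claim":      {"precision": 0.4435483870967742, "recall": 0.3819444444444444, "support": 144.0},
    "Majorclaim": {"precision": 0.6166666666666667, "recall": 0.5138888888888888, "support": 72.0},
    "Premise":    {"precision": 0.7976470588235294, "recall": 0.8625954198473282, "support": 393.0},
}

def f1(p, r):
    # F1 is the harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

def macro_avg(metric):
    # Unweighted mean over classes.
    return sum(c[metric] for c in scores.values()) / len(scores)

def weighted_avg(metric):
    # Mean weighted by class support (number of true instances per class).
    total = sum(c["support"] for c in scores.values())
    return sum(c[metric] * c["support"] for c in scores.values()) / total

print(macro_avg("precision"))     # ≈ 0.6193, matching the Macro avg row
print(weighted_avg("precision"))  # ≈ 0.6925, matching the Weighted avg row
print(f1(scores["Claim"]["precision"], scores["Claim"]["recall"]))  # ≈ 0.4104
```

The same two helpers applied to `"recall"` reproduce the remaining averaged entries, which is a quick sanity check that the card's summary rows are consistent with its per-class numbers.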