---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-spans
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: spans
      split: train[80%:100%]
      args: spans
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9412361055794923
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# longformer-spans

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:

- Loss: 0.2837
- Accuracy: 0.9412

| Label | Precision | Recall | F1-score | Support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| B | 0.8617 | 0.8840 | 0.8727 | 1043 |
| I | 0.9506 | 0.9647 | 0.9576 | 17350 |
| O | 0.9322 | 0.9035 | 0.9177 | 9226 |
| Macro avg | 0.9149 | 0.9174 | 0.9160 | 27619 |
| Weighted avg | 0.9411 | 0.9412 | 0.9411 | 27619 |

## Model description

longformer-spans tags every token of an essay with one of three labels: B (beginning of a span), I (inside a span), or O (outside any span). It was obtained by fine-tuning [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096), whose attention mechanism scales to inputs of up to 4,096 tokens, so a complete essay can usually be processed in a single forward pass.
## Intended uses & limitations

The model is intended for token-level span identification (B/I/O tagging) in essays similar to those in the essays_su_g corpus. It has not been evaluated here on other domains or text genres, and inputs longer than 4,096 tokens have to be truncated or windowed.
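
A minimal usage sketch with the `transformers` pipeline API; the checkpoint identifier below is a placeholder, so substitute the actual Hub id or a local path to this model:

```python
from transformers import pipeline

# Placeholder checkpoint id -- replace with the real Hub id or a local directory.
model_id = "your-username/longformer-spans"

# Token-classification pipeline; "simple" aggregation merges word pieces and
# adjacent tokens that share the same label into contiguous groups.
tagger = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

essay = "Cloning should be banned. First of all, it is unethical to create life only to discard it."
for group in tagger(essay):
    print(group["entity_group"], round(group["score"], 3), repr(essay[group["start"]:group["end"]]))
```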
## Training and evaluation data

The model was fine-tuned on the `spans` configuration of the essays_su_g dataset, in which every token carries a B, I, or O label. The metrics reported above were computed on the `train[80%:100%]` slice of that dataset (see the model-index metadata at the top of this card).
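
As a sketch, the evaluation slice can be loaded with `datasets` (assuming the corpus is available under this id; a namespaced Hub path may be required instead):

```python
from datasets import load_dataset

# Hypothetical dataset path -- the card only names "essays_su_g"; on the Hub a
# namespaced id such as "some-user/essays_su_g" may be needed.
eval_split = load_dataset("essays_su_g", "spans", split="train[80%:100%]")

print(eval_split)      # features should include the tokens and their span labels
print(eval_split[0])   # first essay of the evaluation slice
```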
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
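
The training script is not part of this card; as an illustrative sketch, the values above map onto `transformers.TrainingArguments` roughly as follows (the output directory and per-epoch evaluation strategy are assumptions):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above, not the original script.
training_args = TrainingArguments(
    output_dir="longformer-spans",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumption: the table below reports metrics once per epoch
)
```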
### Training results

| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:| |
| No log | 1.0 | 41 | 0.2927 | {'precision': 0.8069852941176471, 'recall': 0.42090124640460214, 'f1-score': 0.5532451165721487, 'support': 1043.0} | {'precision': 0.8852390417407678, 'recall': 0.9754466858789625, 'f1-score': 0.9281561917297357, 'support': 17350.0} | {'precision': 0.9349000879728541, 'recall': 0.8063082592672881, 'f1-score': 0.8658557876971424, 'support': 9226.0} | 0.8980 | {'precision': 0.8757081412770896, 'recall': 0.7342187305169509, 'f1-score': 0.7824190319996757, 'support': 27619.0} | {'precision': 0.8988729225389979, 'recall': 0.8980049965603389, 'f1-score': 0.8931869394398603, 'support': 27619.0} | |
| No log | 2.0 | 82 | 0.1958 | {'precision': 0.7986171132238548, 'recall': 0.8859060402684564, 'f1-score': 0.84, 'support': 1043.0} | {'precision': 0.9361619307123394, 'recall': 0.9703170028818444, 'f1-score': 0.9529335182407381, 'support': 17350.0} | {'precision': 0.9455124425050124, 'recall': 0.8689572946022112, 'f1-score': 0.905619881389438, 'support': 9226.0} | 0.9333 | {'precision': 0.8934304954804023, 'recall': 0.908393445917504, 'f1-score': 0.8995177998767253, 'support': 27619.0} | {'precision': 0.9340912032116592, 'recall': 0.933270574604439, 'f1-score': 0.932863809956036, 'support': 27619.0} | |
| No log | 3.0 | 123 | 0.1754 | {'precision': 0.8552631578947368, 'recall': 0.87248322147651, 'f1-score': 0.8637873754152824, 'support': 1043.0} | {'precision': 0.966759166322253, 'recall': 0.9437463976945245, 'f1-score': 0.9551141832181294, 'support': 17350.0} | {'precision': 0.8988355167394468, 'recall': 0.9370257966616085, 'f1-score': 0.9175334323922734, 'support': 9226.0} | 0.9388 | {'precision': 0.9069526136521455, 'recall': 0.9177518052775477, 'f1-score': 0.9121449970085617, 'support': 27619.0} | {'precision': 0.9398590639347346, 'recall': 0.9388102393279989, 'f1-score': 0.9391116535227126, 'support': 27619.0} | |
| No log | 4.0 | 164 | 0.1844 | {'precision': 0.861003861003861, 'recall': 0.8552253116011506, 'f1-score': 0.8581048581048581, 'support': 1043.0} | {'precision': 0.9428187016481668, 'recall': 0.9693371757925072, 'f1-score': 0.9558940547914061, 'support': 17350.0} | {'precision': 0.9376786735277302, 'recall': 0.8887925428137872, 'f1-score': 0.912581381114017, 'support': 9226.0} | 0.9381 | {'precision': 0.9138337453932527, 'recall': 0.904451676735815, 'f1-score': 0.908860098003427, 'support': 27619.0} | {'precision': 0.9380120548386821, 'recall': 0.9381223071074261, 'f1-score': 0.937732757876541, 'support': 27619.0} | |
| No log | 5.0 | 205 | 0.2030 | {'precision': 0.8463611859838275, 'recall': 0.9031639501438159, 'f1-score': 0.8738404452690166, 'support': 1043.0} | {'precision': 0.9367116741679169, 'recall': 0.9716426512968299, 'f1-score': 0.9538574702237813, 'support': 17350.0} | {'precision': 0.9452344576330943, 'recall': 0.8717754172989378, 'f1-score': 0.9070200169157033, 'support': 9226.0} | 0.9357 | {'precision': 0.9094357725949461, 'recall': 0.9155273395798611, 'f1-score': 0.9115726441361671, 'support': 27619.0} | {'precision': 0.9361466877844027, 'recall': 0.9356964408559325, 'f1-score': 0.9351898826482664, 'support': 27619.0} | |
| No log | 6.0 | 246 | 0.1880 | {'precision': 0.8593012275731823, 'recall': 0.87248322147651, 'f1-score': 0.8658420551855375, 'support': 1043.0} | {'precision': 0.9416148372275452, 'recall': 0.9685878962536023, 'f1-score': 0.954910929908799, 'support': 17350.0} | {'precision': 0.9369907035464249, 'recall': 0.8848905267721656, 'f1-score': 0.9101956630804393, 'support': 9226.0} | 0.9370 | {'precision': 0.9126355894490508, 'recall': 0.9086538815007593, 'f1-score': 0.9103162160582586, 'support': 27619.0} | {'precision': 0.9369616871420418, 'recall': 0.9369998913791231, 'f1-score': 0.9366104162010322, 'support': 27619.0} | |
| No log | 7.0 | 287 | 0.1950 | {'precision': 0.8525345622119815, 'recall': 0.8868648130393096, 'f1-score': 0.8693609022556391, 'support': 1043.0} | {'precision': 0.9470030477480528, 'recall': 0.9670893371757925, 'f1-score': 0.9569408007300102, 'support': 17350.0} | {'precision': 0.9362522686025408, 'recall': 0.8946455668762194, 'f1-score': 0.9149761667220929, 'support': 9226.0} | 0.9399 | {'precision': 0.9119299595208584, 'recall': 0.9161999056971072, 'f1-score': 0.9137592899025807, 'support': 27619.0} | {'precision': 0.9398443048967325, 'recall': 0.9398602411383468, 'f1-score': 0.939615352760648, 'support': 27619.0} | |
| No log | 8.0 | 328 | 0.2260 | {'precision': 0.8517495395948435, 'recall': 0.8868648130393096, 'f1-score': 0.868952559887271, 'support': 1043.0} | {'precision': 0.933457985041795, 'recall': 0.978328530259366, 'f1-score': 0.955366691056453, 'support': 17350.0} | {'precision': 0.9556833153671098, 'recall': 0.8648384998916107, 'f1-score': 0.9079943100995733, 'support': 9226.0} | 0.9370 | {'precision': 0.9136302800012494, 'recall': 0.9100106143967621, 'f1-score': 0.9107711870144325, 'support': 27619.0} | {'precision': 0.9377966283301177, 'recall': 0.9369636844201455, 'f1-score': 0.9362788339465783, 'support': 27619.0} | |
| No log | 9.0 | 369 | 0.2217 | {'precision': 0.8499079189686924, 'recall': 0.8849472674976031, 'f1-score': 0.8670737435415689, 'support': 1043.0} | {'precision': 0.9531535648994516, 'recall': 0.9616138328530259, 'f1-score': 0.9573650083204224, 'support': 17350.0} | {'precision': 0.927455975191051, 'recall': 0.9076522870149577, 'f1-score': 0.9174472747192549, 'support': 9226.0} | 0.9407 | {'precision': 0.910172486353065, 'recall': 0.9180711291218623, 'f1-score': 0.9139620088604153, 'support': 27619.0} | {'precision': 0.9406704492415535, 'recall': 0.9406930011948297, 'f1-score': 0.9406209263707241, 'support': 27619.0} | |
| No log | 10.0 | 410 | 0.2663 | {'precision': 0.8574091332712023, 'recall': 0.8820709491850431, 'f1-score': 0.8695652173913044, 'support': 1043.0} | {'precision': 0.9361054205193511, 'recall': 0.9744668587896254, 'f1-score': 0.9549010194572307, 'support': 17350.0} | {'precision': 0.9483794932233353, 'recall': 0.8722089746368957, 'f1-score': 0.9087008074078257, 'support': 9226.0} | 0.9368 | {'precision': 0.9139646823379629, 'recall': 0.9095822608705214, 'f1-score': 0.9110556814187869, 'support': 27619.0} | {'precision': 0.937233642655096, 'recall': 0.9368188565842355, 'f1-score': 0.9362454418504176, 'support': 27619.0} | |
| No log | 11.0 | 451 | 0.2752 | {'precision': 0.8570110701107011, 'recall': 0.8906999041227229, 'f1-score': 0.8735307945463094, 'support': 1043.0} | {'precision': 0.9348246340789838, 'recall': 0.9755043227665706, 'f1-score': 0.954731349598082, 'support': 17350.0} | {'precision': 0.9505338078291815, 'recall': 0.8685237372642532, 'f1-score': 0.9076801087449027, 'support': 9226.0} | 0.9366 | {'precision': 0.9141231706729555, 'recall': 0.9115759880511822, 'f1-score': 0.911980750963098, 'support': 27619.0} | {'precision': 0.9371336709666482, 'recall': 0.9365654078713929, 'f1-score': 0.9359476526130199, 'support': 27619.0} | |
| No log | 12.0 | 492 | 0.2662 | {'precision': 0.8555657773689053, 'recall': 0.8916586768935763, 'f1-score': 0.8732394366197183, 'support': 1043.0} | {'precision': 0.9461304151624549, 'recall': 0.9667435158501441, 'f1-score': 0.9563259022749302, 'support': 17350.0} | {'precision': 0.9358246251703771, 'recall': 0.8930197268588771, 'f1-score': 0.9139212423738213, 'support': 9226.0} | 0.9393 | {'precision': 0.9125069392339125, 'recall': 0.9171406398675325, 'f1-score': 0.9144955270894899, 'support': 27619.0} | {'precision': 0.9392677432450942, 'recall': 0.9392809297947066, 'f1-score': 0.9390231550383894, 'support': 27619.0} | |
| 0.1232 | 13.0 | 533 | 0.2681 | {'precision': 0.8646895273401297, 'recall': 0.8945349952061361, 'f1-score': 0.8793590951932139, 'support': 1043.0} | {'precision': 0.9548364966841985, 'recall': 0.9626512968299712, 'f1-score': 0.9587279719878308, 'support': 17350.0} | {'precision': 0.9293766578249337, 'recall': 0.9114459137220897, 'f1-score': 0.9203239575352961, 'support': 9226.0} | 0.9430 | {'precision': 0.9163008939497539, 'recall': 0.9228774019193989, 'f1-score': 0.9194703415721136, 'support': 27619.0} | {'precision': 0.9429274571700438, 'recall': 0.9429740396104132, 'f1-score': 0.9429020124731535, 'support': 27619.0} | |
| 0.1232 | 14.0 | 574 | 0.2835 | {'precision': 0.8643592142188962, 'recall': 0.8859060402684564, 'f1-score': 0.875, 'support': 1043.0} | {'precision': 0.9461283248045886, 'recall': 0.9697406340057637, 'f1-score': 0.9577889733299177, 'support': 17350.0} | {'precision': 0.9405726018022128, 'recall': 0.8937784522003035, 'f1-score': 0.9165786694825766, 'support': 9226.0} | 0.9412 | {'precision': 0.9170200469418992, 'recall': 0.9164750421581745, 'f1-score': 0.9164558809374981, 'support': 27619.0} | {'precision': 0.9411845439739722, 'recall': 0.9411998986205149, 'f1-score': 0.9408964297013043, 'support': 27619.0} | |
| 0.1232 | 15.0 | 615 | 0.2837 | {'precision': 0.8616822429906542, 'recall': 0.8839884947267498, 'f1-score': 0.872692853762423, 'support': 1043.0} | {'precision': 0.9506446299767138, 'recall': 0.9647262247838617, 'f1-score': 0.957633664216037, 'support': 17350.0} | {'precision': 0.9322299261910088, 'recall': 0.9035334923043572, 'f1-score': 0.9176574196389256, 'support': 9226.0} | 0.9412 | {'precision': 0.9148522663861257, 'recall': 0.9174160706049896, 'f1-score': 0.9159946458724618, 'support': 27619.0} | {'precision': 0.9411337198513156, 'recall': 0.9412361055794923, 'f1-score': 0.9410720907422853, 'support': 27619.0} | |
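
The per-label cells in this table have the shape produced by `sklearn.metrics.classification_report(..., output_dict=True)`; a small sketch (with made-up toy labels, not the real evaluation data) of how such per-label and averaged scores are obtained from flattened token predictions:

```python
from sklearn.metrics import classification_report

# Toy flattened gold and predicted token labels -- illustrative only.
y_true = ["O", "B", "I", "I", "O", "B", "I", "O"]
y_pred = ["O", "B", "I", "O", "O", "B", "I", "I"]

# output_dict=True returns precision/recall/f1-score/support per label plus
# accuracy, macro avg and weighted avg, matching the cells above.
report = classification_report(y_true, y_pred, output_dict=True)
print(report["B"], report["accuracy"], report["macro avg"])
```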
### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2