---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-simple
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: simple
      split: train[80%:100%]
      args: simple
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8428617980375829
---

# longformer-simple

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5674
- Accuracy: 0.8429

| Label        | Precision | Recall | F1-score | Support |
|:-------------|----------:|-------:|---------:|--------:|
| Claim        | 0.6009    | 0.6178 | 0.6093   | 4168    |
| Majorclaim   | 0.7784    | 0.8127 | 0.7952   | 2152    |
| O            | 0.9334    | 0.8978 | 0.9152   | 9226    |
| Premise      | 0.8738    | 0.8840 | 0.8789   | 12073   |
| Macro avg    | 0.7966    | 0.8031 | 0.7996   | 27619   |
| Weighted avg | 0.8451    | 0.8429 | 0.8438   | 27619   |

## Model description

longformer-simple is a Longformer-based token classification model for argument mining. For each token in an essay it predicts one of four labels (Majorclaim, Claim, Premise, or O for non-argumentative text), so it can be used to segment argumentative components in documents of up to 4,096 tokens.
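
Below is a minimal inference sketch, not taken from the original training code. The repository id is a placeholder for wherever this checkpoint is hosted, and the id2label mapping is assumed to be stored in the model config, as the Trainer does by default.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "longformer-simple"  # placeholder: replace with the actual repo id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Therefore, schools should make physical education compulsory."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id.item()])
```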

## Intended uses & limitations

The model is intended for token-level argument mining on essay-style prose similar to the essays_su_g training data. It has only been evaluated on a held-out slice of that corpus, so performance on other domains or genres is unknown. Per-class results are uneven: the Claim label (F1 around 0.61) is considerably harder for the model than Majorclaim, Premise, or O.

## Training and evaluation data

The model was fine-tuned on the `simple` configuration of the essays_su_g dataset. The metadata above records that the reported metrics were computed on the `train[80%:100%]` slice; the exact slice used for fine-tuning is not stated in this card. A loading sketch follows below.
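
A minimal sketch of loading the data with the `datasets` library, assuming the dataset is available under the `essays_su_g` identifier (on the Hub or via a local loading script); using `train[:80%]` for training is an assumption, not something this card records:

```python
from datasets import load_dataset

# The "simple" config and the 80/20 boundary follow the evaluation metadata above;
# train[:80%] as the training slice is an assumption.
train_split = load_dataset("essays_su_g", "simple", split="train[:80%]")
eval_split = load_dataset("essays_su_g", "simple", split="train[80%:100%]")

print(train_split)
print(eval_split)
```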

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
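
The original training script is not part of this card. As a rough sketch, the hyperparameters above map onto a `TrainingArguments`/`Trainer` setup like the one below; `tokenized_train`, `tokenized_eval`, the label list, and the output directory are placeholders rather than the values actually used.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

label_list = ["O", "Claim", "Majorclaim", "Premise"]  # illustrative; actual order may differ

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = AutoModelForTokenClassification.from_pretrained(
    "allenai/longformer-base-4096", num_labels=len(label_list)
)

# Mirrors the list above; the Adam betas/epsilon and the linear schedule are the
# TrainingArguments defaults, so they do not need to be set explicitly.
args = TrainingArguments(
    output_dir="longformer-simple",      # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=12,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_train,       # placeholder: pre-tokenized, label-aligned split
    eval_dataset=tokenized_eval,         # placeholder
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```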

### Training results

| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg | |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:| |
| No log | 1.0 | 41 | 0.5695 | {'precision': 0.4966580976863753, 'recall': 0.2317658349328215, 'f1-score': 0.3160477670538197, 'support': 4168.0} | {'precision': 0.5313791807591132, 'recall': 0.6570631970260223, 'f1-score': 0.5875753168501975, 'support': 2152.0} | {'precision': 0.9187296220263254, 'recall': 0.8246260567960113, 'f1-score': 0.8691380590620894, 'support': 9226.0} | {'precision': 0.7775590551181102, 'recall': 0.9488113973328915, 'f1-score': 0.8546912889386308, 'support': 12073.0} | 0.7764 | {'precision': 0.681081488897481, 'recall': 0.6655666215219367, 'f1-score': 0.6568631079761843, 'support': 27619.0} | {'precision': 0.7631438109057622, 'recall': 0.7763858213548644, 'f1-score': 0.7574171707594364, 'support': 27619.0} | |
| No log | 2.0 | 82 | 0.4419 | {'precision': 0.6064395123476086, 'recall': 0.4654510556621881, 'f1-score': 0.5266730012216642, 'support': 4168.0} | {'precision': 0.7129909365558912, 'recall': 0.7676579925650557, 'f1-score': 0.7393152830610875, 'support': 2152.0} | {'precision': 0.9188125491959969, 'recall': 0.885649252113592, 'f1-score': 0.9019261548650588, 'support': 9226.0} | {'precision': 0.8368660105980318, 'recall': 0.9156796156713327, 'f1-score': 0.8745006526124274, 'support': 12073.0} | 0.8262 | {'precision': 0.7687772521743822, 'recall': 0.7586094790030422, 'f1-score': 0.7606037729400594, 'support': 27619.0} | {'precision': 0.8198140522019413, 'recall': 0.8261703899489482, 'f1-score': 0.8206378450347307, 'support': 27619.0} | |
| No log | 3.0 | 123 | 0.4306 | {'precision': 0.5615019421665948, 'recall': 0.6242802303262955, 'f1-score': 0.5912292660758918, 'support': 4168.0} | {'precision': 0.716514954486346, 'recall': 0.7681226765799256, 'f1-score': 0.7414218434626599, 'support': 2152.0} | {'precision': 0.9523123123123123, 'recall': 0.8593106438326469, 'f1-score': 0.9034243063073331, 'support': 9226.0} | {'precision': 0.8682911033756983, 'recall': 0.8884287252547006, 'f1-score': 0.8782444935724228, 'support': 12073.0} | 0.8295 | {'precision': 0.7746550780852379, 'recall': 0.7850355689983921, 'f1-score': 0.778579977354577, 'support': 27619.0} | {'precision': 0.8382342648703133, 'recall': 0.8294652232159021, 'f1-score': 0.8326811908116615, 'support': 27619.0} | |
| No log | 4.0 | 164 | 0.4218 | {'precision': 0.6392779333955805, 'recall': 0.4928023032629559, 'f1-score': 0.5565641511990246, 'support': 4168.0} | {'precision': 0.7853107344632768, 'recall': 0.775092936802974, 'f1-score': 0.7801683816651075, 'support': 2152.0} | {'precision': 0.9142207299902524, 'recall': 0.9149143724257534, 'f1-score': 0.9145674196868736, 'support': 9226.0} | {'precision': 0.8483408690321097, 'recall': 0.9169220574836412, 'f1-score': 0.8812992596130881, 'support': 12073.0} | 0.8412 | {'precision': 0.7967875667203048, 'recall': 0.7749329174938311, 'f1-score': 0.7831498030410234, 'support': 27619.0} | {'precision': 0.8338867769894811, 'recall': 0.8411962779246172, 'f1-score': 0.8355265112741501, 'support': 27619.0} | |
| No log | 5.0 | 205 | 0.4481 | {'precision': 0.5631399317406144, 'recall': 0.6729846449136276, 'f1-score': 0.6131817684992896, 'support': 4168.0} | {'precision': 0.7194693718298869, 'recall': 0.8568773234200744, 'f1-score': 0.782184517497349, 'support': 2152.0} | {'precision': 0.9297755945070895, 'recall': 0.9026663776284414, 'f1-score': 0.9160204586701864, 'support': 9226.0} | {'precision': 0.9064579960424537, 'recall': 0.8347552389629752, 'f1-score': 0.8691302660514854, 'support': 12073.0} | 0.8348 | {'precision': 0.7797107235300111, 'recall': 0.8168208962312796, 'f1-score': 0.7951292526795776, 'support': 27619.0} | {'precision': 0.8478671329452823, 'recall': 0.8347514392266193, 'f1-score': 0.839393792189799, 'support': 27619.0} | |
| No log | 6.0 | 246 | 0.4486 | {'precision': 0.5991150442477876, 'recall': 0.6497120921305183, 'f1-score': 0.6233885819521179, 'support': 4168.0} | {'precision': 0.7814183123877917, 'recall': 0.8090148698884758, 'f1-score': 0.7949771689497717, 'support': 2152.0} | {'precision': 0.917592492719232, 'recall': 0.9220680685020594, 'f1-score': 0.9198248364599665, 'support': 9226.0} | {'precision': 0.8957758620689655, 'recall': 0.860680858113145, 'f1-score': 0.8778777510243738, 'support': 12073.0} | 0.8453 | {'precision': 0.7984754278559442, 'recall': 0.8103689721585496, 'f1-score': 0.8040170845965575, 'support': 27619.0} | {'precision': 0.8493839035906282, 'recall': 0.8453238712480539, 'f1-score': 0.8470254718292933, 'support': 27619.0} | |
| No log | 7.0 | 287 | 0.4938 | {'precision': 0.6172607879924953, 'recall': 0.5525431861804223, 'f1-score': 0.5831117863020636, 'support': 4168.0} | {'precision': 0.8030447193149381, 'recall': 0.7843866171003717, 'f1-score': 0.7936060178655383, 'support': 2152.0} | {'precision': 0.925684628975265, 'recall': 0.9086277910253631, 'f1-score': 0.9170769062465813, 'support': 9226.0} | {'precision': 0.8585231736056559, 'recall': 0.9052431044479416, 'f1-score': 0.8812643631818732, 'support': 12073.0} | 0.8437 | {'precision': 0.8011283274720886, 'recall': 0.7877001746885246, 'f1-score': 0.7937647683990142, 'support': 27619.0} | {'precision': 0.840226360917678, 'recall': 0.8437307650530432, 'f1-score': 0.8414028845895708, 'support': 27619.0} | |
| No log | 8.0 | 328 | 0.5387 | {'precision': 0.5799445471349353, 'recall': 0.6022072936660269, 'f1-score': 0.5908662900188324, 'support': 4168.0} | {'precision': 0.7539513028620248, 'recall': 0.8201672862453532, 'f1-score': 0.7856665924771868, 'support': 2152.0} | {'precision': 0.9451071221771858, 'recall': 0.8845653587686971, 'f1-score': 0.9138346117238675, 'support': 9226.0} | {'precision': 0.8677222898903776, 'recall': 0.8851155470885447, 'f1-score': 0.8763326226012793, 'support': 12073.0} | 0.8372 | {'precision': 0.7866813155161309, 'recall': 0.7980138714421554, 'f1-score': 0.7916750292052914, 'support': 27619.0} | {'precision': 0.8412788874061601, 'recall': 0.8371773054781129, 'f1-score': 0.8387156335942302, 'support': 27619.0} | |
| No log | 9.0 | 369 | 0.5456 | {'precision': 0.5713418336369156, 'recall': 0.6773032629558541, 'f1-score': 0.6198265451751016, 'support': 4168.0} | {'precision': 0.7620881471972615, 'recall': 0.8276022304832714, 'f1-score': 0.7934952105145913, 'support': 2152.0} | {'precision': 0.9396749084249084, 'recall': 0.8897680468241925, 'f1-score': 0.9140407527001447, 'support': 9226.0} | {'precision': 0.8894442050840156, 'recall': 0.8549656257765261, 'f1-score': 0.8718641777177126, 'support': 12073.0} | 0.8376 | {'precision': 0.7906372735857753, 'recall': 0.8124097915099611, 'f1-score': 0.7998066715268876, 'support': 27619.0} | {'precision': 0.848295269505583, 'recall': 0.8376479959448206, 'f1-score': 0.8418116128503821, 'support': 27619.0} | |
| No log | 10.0 | 410 | 0.5597 | {'precision': 0.6104051786142412, 'recall': 0.6108445297504799, 'f1-score': 0.610624775152896, 'support': 4168.0} | {'precision': 0.7580782312925171, 'recall': 0.8285315985130112, 'f1-score': 0.7917406749555951, 'support': 2152.0} | {'precision': 0.9353891336270191, 'recall': 0.8975720789074355, 'f1-score': 0.9160904917307374, 'support': 9226.0} | {'precision': 0.876255819652046, 'recall': 0.8885943841630084, 'f1-score': 0.8823819707188683, 'support': 12073.0} | 0.8450 | {'precision': 0.7950320907964559, 'recall': 0.8063856478334838, 'f1-score': 0.8002094781395241, 'support': 27619.0} | {'precision': 0.8466812627433173, 'recall': 0.8449980086172563, 'f1-score': 0.8455685725239289, 'support': 27619.0} | |
| No log | 11.0 | 451 | 0.5618 | {'precision': 0.5962230215827338, 'recall': 0.6362763915547025, 'f1-score': 0.6155988857938719, 'support': 4168.0} | {'precision': 0.7545803152961227, 'recall': 0.8229553903345725, 'f1-score': 0.7872860635696821, 'support': 2152.0} | {'precision': 0.9359309326366012, 'recall': 0.8930197268588771, 'f1-score': 0.9139719341061624, 'support': 9226.0} | {'precision': 0.880459196406289, 'recall': 0.8766669427648471, 'f1-score': 0.8785589773387565, 'support': 12073.0} | 0.8417 | {'precision': 0.7917983664804367, 'recall': 0.8072296128782498, 'f1-score': 0.7988539652021183, 'support': 27619.0} | {'precision': 0.8462868697343314, 'recall': 0.8416669683913248, 'f1-score': 0.8435933003463223, 'support': 27619.0} | |
| No log | 12.0 | 492 | 0.5674 | {'precision': 0.6009334889148191, 'recall': 0.6178023032629558, 'f1-score': 0.6092511534366496, 'support': 4168.0} | {'precision': 0.7783711615487316, 'recall': 0.8127323420074349, 'f1-score': 0.7951807228915663, 'support': 2152.0} | {'precision': 0.9334009465855307, 'recall': 0.8977888575764145, 'f1-score': 0.9152486187845303, 'support': 9226.0} | {'precision': 0.8738229755178908, 'recall': 0.8839559347303901, 'f1-score': 0.8788602487029565, 'support': 12073.0} | 0.8429 | {'precision': 0.7966321431417431, 'recall': 0.8030698593942989, 'f1-score': 0.7996351859539257, 'support': 27619.0} | {'precision': 0.8451054505259219, 'recall': 0.8428617980375829, 'f1-score': 0.8438086557327736, 'support': 27619.0} | |
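
The dictionary-valued cells above match the output format of scikit-learn's `classification_report`. The exact evaluation code is not part of this card, but a minimal sketch of producing per-label metrics in that shape from flattened token-level predictions could look like this:

```python
from sklearn.metrics import classification_report

# Flattened gold and predicted tags for every scored token in the evaluation split;
# the toy lists below just stand in for real model output.
y_true = ["O", "Premise", "Premise", "Claim", "O"]
y_pred = ["O", "Premise", "Claim", "Claim", "O"]

report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
print(report["Premise"])    # {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}
print(report["macro avg"])  # macro-averaged precision / recall / F1
print(report["accuracy"])   # overall token accuracy
```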

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2