Theoreticallyhugo committed
Commit b349f2c • Parent(s): e689d7a

trainer: training complete at 2024-03-02 11:55:54.389485.

Files changed:
- README.md (+26 -25)
- meta_data/README_s42_e16.md (+95 -0)
- meta_data/meta_s42_e16_cvi0.json (+1 -1)
- model.safetensors (+1 -1)
README.md
CHANGED
@@ -17,12 +17,12 @@ model-index:
       name: essays_su_g
       type: essays_su_g
       config: spans
-      split: train[
+      split: train[0%:20%]
       args: spans
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.9266721210881571
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,13 +32,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- B: {'precision': 0.
-- I: {'precision': 0.
-- O: {'precision': 0.
-- Accuracy: 0.
-- Macro avg: {'precision': 0.
-- Weighted avg: {'precision': 0.
+- Loss: 0.4247
+- B: {'precision': 0.8405063291139241, 'recall': 0.8790820829655781, 'f1-score': 0.8593615185504745, 'support': 1133.0}
+- I: {'precision': 0.9346315063405316, 'recall': 0.960835651557301, 'f1-score': 0.9475524475524475, 'support': 18333.0}
+- O: {'precision': 0.9215222532788647, 'recall': 0.8686663964329144, 'f1-score': 0.8943140323422013, 'support': 9868.0}
+- Accuracy: 0.9267
+- Macro avg: {'precision': 0.8988866962444403, 'recall': 0.9028613769852646, 'f1-score': 0.9004093328150411, 'support': 29334.0}
+- Weighted avg: {'precision': 0.9265860323168638, 'recall': 0.9266721210881571, 'f1-score': 0.926236670506905, 'support': 29334.0}
 
 ## Model description
 
@@ -63,27 +63,28 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 16
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
 |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
-| No log | 1.0 | 41 | 0.
-| No log | 2.0 | 82 | 0.
-| No log | 3.0 | 123 | 0.
-| No log | 4.0 | 164 | 0.
-| No log | 5.0 | 205 | 0.
-| No log | 6.0 | 246 | 0.
-| No log | 7.0 | 287 | 0.
-| No log | 8.0 | 328 | 0.
-| No log | 9.0 | 369 | 0.
-| No log | 10.0 | 410 | 0.
-| No log | 11.0 | 451 | 0.
-| No log | 12.0 | 492 | 0.
-| 0.
-| 0.
-| 0.
+| No log | 1.0 | 41 | 0.3607 | {'precision': 0.8202247191011236, 'recall': 0.19329214474845544, 'f1-score': 0.3128571428571429, 'support': 1133.0} | {'precision': 0.8412101850981866, 'recall': 0.9767086674303169, 'f1-score': 0.9039097402761301, 'support': 18333.0} | {'precision': 0.9249453797712376, 'recall': 0.7293271179570329, 'f1-score': 0.8155702872684006, 'support': 9868.0} | 0.8632 | {'precision': 0.8621267613235158, 'recall': 0.6331093100452684, 'f1-score': 0.6774457234672245, 'support': 29334.0} | {'precision': 0.8685682804162133, 'recall': 0.8632303811277017, 'f1-score': 0.8513633328596172, 'support': 29334.0} |
+| No log | 2.0 | 82 | 0.2532 | {'precision': 0.7996755879967559, 'recall': 0.8702559576345984, 'f1-score': 0.8334742180896026, 'support': 1133.0} | {'precision': 0.9331675137882557, 'recall': 0.9413625702285496, 'f1-score': 0.937247128465528, 'support': 18333.0} | {'precision': 0.8902883314250026, 'recall': 0.8667409809485205, 'f1-score': 0.878356867779204, 'support': 9868.0} | 0.9135 | {'precision': 0.874377144403338, 'recall': 0.892786502937223, 'f1-score': 0.8830260714447782, 'support': 29334.0} | {'precision': 0.9135868864110707, 'recall': 0.9135133292425173, 'f1-score': 0.9134282220801536, 'support': 29334.0} |
+| No log | 3.0 | 123 | 0.2280 | {'precision': 0.8041074249605056, 'recall': 0.8984995586937334, 'f1-score': 0.8486869528970404, 'support': 1133.0} | {'precision': 0.9231209660628774, 'recall': 0.9673812251131839, 'f1-score': 0.9447329870821681, 'support': 18333.0} | {'precision': 0.9356368563685636, 'recall': 0.8396838265099311, 'f1-score': 0.8850672933133945, 'support': 9868.0} | 0.9218 | {'precision': 0.8876217491306488, 'recall': 0.9018548701056162, 'f1-score': 0.8928290777642011, 'support': 29334.0} | {'precision': 0.9227345360999514, 'recall': 0.9217631417467785, 'f1-score': 0.9209516676970857, 'support': 29334.0} |
+| No log | 4.0 | 164 | 0.2643 | {'precision': 0.8111111111111111, 'recall': 0.9020300088261254, 'f1-score': 0.854157960718763, 'support': 1133.0} | {'precision': 0.9180539091893006, 'recall': 0.9716358479245077, 'f1-score': 0.9440852236591055, 'support': 18333.0} | {'precision': 0.9426825049013955, 'recall': 0.8283340089177138, 'f1-score': 0.8818167107179459, 'support': 9868.0} | 0.9207 | {'precision': 0.8906158417339357, 'recall': 0.9006666218894489, 'f1-score': 0.8933532983652714, 'support': 29334.0} | {'precision': 0.9222084326864153, 'recall': 0.9207404377173246, 'f1-score': 0.9196646443104053, 'support': 29334.0} |
+| No log | 5.0 | 205 | 0.2475 | {'precision': 0.8158526821457166, 'recall': 0.8993821712268314, 'f1-score': 0.8555835432409741, 'support': 1133.0} | {'precision': 0.9278074866310161, 'recall': 0.965308460153821, 'f1-score': 0.9461865426257119, 'support': 18333.0} | {'precision': 0.9316391077571856, 'recall': 0.8507296311309283, 'f1-score': 0.8893479527517347, 'support': 9868.0} | 0.9242 | {'precision': 0.8917664255113061, 'recall': 0.9051400875038601, 'f1-score': 0.8970393462061402, 'support': 29334.0} | {'precision': 0.924772293469197, 'recall': 0.9242176314174678, 'f1-score': 0.9235664975183513, 'support': 29334.0} |
+| No log | 6.0 | 246 | 0.2554 | {'precision': 0.8058176100628931, 'recall': 0.9046778464254193, 'f1-score': 0.8523908523908524, 'support': 1133.0} | {'precision': 0.9404408990459764, 'recall': 0.951726395025364, 'f1-score': 0.946049991866833, 'support': 18333.0} | {'precision': 0.9114523083394679, 'recall': 0.8782934738548844, 'f1-score': 0.894565722248026, 'support': 9868.0} | 0.9252 | {'precision': 0.8859036058161124, 'recall': 0.9115659051018893, 'f1-score': 0.8976688555019038, 'support': 29334.0} | {'precision': 0.9254893888697421, 'recall': 0.9252062453126065, 'f1-score': 0.9251131071042821, 'support': 29334.0} |
+| No log | 7.0 | 287 | 0.2822 | {'precision': 0.8323746918652424, 'recall': 0.8940864960282436, 'f1-score': 0.8621276595744681, 'support': 1133.0} | {'precision': 0.9353455123113582, 'recall': 0.9635084274259532, 'f1-score': 0.9492181202643882, 'support': 18333.0} | {'precision': 0.9287261698440208, 'recall': 0.8688690717470612, 'f1-score': 0.8978010471204189, 'support': 9868.0} | 0.9290 | {'precision': 0.8988154580068738, 'recall': 0.9088213317337527, 'f1-score': 0.9030489423197584, 'support': 29334.0} | {'precision': 0.9291415983878176, 'recall': 0.9289902502215859, 'f1-score': 0.9285575499450874, 'support': 29334.0} |
+| No log | 8.0 | 328 | 0.3068 | {'precision': 0.8181089743589743, 'recall': 0.9011473962930273, 'f1-score': 0.8576228475430492, 'support': 1133.0} | {'precision': 0.933372111469515, 'recall': 0.9627993236240658, 'f1-score': 0.9478573729996776, 'support': 18333.0} | {'precision': 0.9279564032697548, 'recall': 0.8627888123226591, 'f1-score': 0.8941868403087749, 'support': 9868.0} | 0.9268 | {'precision': 0.8931458296994147, 'recall': 0.9089118440799174, 'f1-score': 0.899889020283834, 'support': 29334.0} | {'precision': 0.9270983219126366, 'recall': 0.9267743914911025, 'f1-score': 0.926317298889901, 'support': 29334.0} |
+| No log | 9.0 | 369 | 0.3574 | {'precision': 0.8315441783649876, 'recall': 0.8887908208296558, 'f1-score': 0.8592150170648465, 'support': 1133.0} | {'precision': 0.9180683108038387, 'recall': 0.9705994654448262, 'f1-score': 0.9436033408458174, 'support': 18333.0} | {'precision': 0.9387941883079739, 'recall': 0.8315768139440616, 'f1-score': 0.8819388467945618, 'support': 9868.0} | 0.9207 | {'precision': 0.8961355591589334, 'recall': 0.8969890334061811, 'f1-score': 0.8949190682350753, 'support': 29334.0} | {'precision': 0.9216986072911089, 'recall': 0.9206722574486943, 'f1-score': 0.9195998909875767, 'support': 29334.0} |
+| No log | 10.0 | 410 | 0.3228 | {'precision': 0.8491048593350383, 'recall': 0.8790820829655781, 'f1-score': 0.8638334778837814, 'support': 1133.0} | {'precision': 0.9479900314226893, 'recall': 0.9544537173403153, 'f1-score': 0.9512108939686336, 'support': 18333.0} | {'precision': 0.9123982273523652, 'recall': 0.897142278070531, 'f1-score': 0.9047059424658934, 'support': 9868.0} | 0.9323 | {'precision': 0.9031643727033641, 'recall': 0.9102260261254749, 'f1-score': 0.9065834381061029, 'support': 29334.0} | {'precision': 0.9321975441198576, 'recall': 0.9322629031158383, 'f1-score': 0.9321916850692957, 'support': 29334.0} |
+| No log | 11.0 | 451 | 0.3397 | {'precision': 0.8524871355060034, 'recall': 0.8773168578993822, 'f1-score': 0.8647237929534581, 'support': 1133.0} | {'precision': 0.941372096765542, 'recall': 0.9572901325478645, 'f1-score': 0.9492643877109477, 'support': 18333.0} | {'precision': 0.9164304461942258, 'recall': 0.8845764085934333, 'f1-score': 0.9002217294900223, 'support': 9868.0} | 0.9297 | {'precision': 0.9034298928219237, 'recall': 0.9063944663468934, 'f1-score': 0.9047366367181428, 'support': 29334.0} | {'precision': 0.9295485858585807, 'recall': 0.9297402331765187, 'f1-score': 0.9295010603371041, 'support': 29334.0} |
+| No log | 12.0 | 492 | 0.3769 | {'precision': 0.8406040268456376, 'recall': 0.884377758164166, 'f1-score': 0.8619354838709679, 'support': 1133.0} | {'precision': 0.9364758459246648, 'recall': 0.9601265477554137, 'f1-score': 0.9481537342777884, 'support': 18333.0} | {'precision': 0.9215707254440403, 'recall': 0.8728212403729225, 'f1-score': 0.8965337774539398, 'support': 9868.0} | 0.9278 | {'precision': 0.8995501994047809, 'recall': 0.9057751820975007, 'f1-score': 0.9022076652008987, 'support': 29334.0} | {'precision': 0.9277587769971628, 'recall': 0.9278311856548714, 'f1-score': 0.927458601951864, 'support': 29334.0} |
+| 0.1199 | 13.0 | 533 | 0.4395 | {'precision': 0.8141945773524721, 'recall': 0.9011473962930273, 'f1-score': 0.8554671135316297, 'support': 1133.0} | {'precision': 0.9200891931134619, 'recall': 0.967817596683576, 'f1-score': 0.9433500810803624, 'support': 18333.0} | {'precision': 0.9357662573897226, 'recall': 0.8341102553708958, 'f1-score': 0.8820188598371196, 'support': 9868.0} | 0.9203 | {'precision': 0.8900166759518856, 'recall': 0.9010250827824997, 'f1-score': 0.8936120181497039, 'support': 29334.0} | {'precision': 0.9212728936187097, 'recall': 0.9202631758369128, 'f1-score': 0.9193237671285989, 'support': 29334.0} |
+| 0.1199 | 14.0 | 574 | 0.4362 | {'precision': 0.8338842975206612, 'recall': 0.8905560458958517, 'f1-score': 0.861288945795988, 'support': 1133.0} | {'precision': 0.9288525106249016, 'recall': 0.9656357388316151, 'f1-score': 0.9468870346598203, 'support': 18333.0} | {'precision': 0.9307225592939878, 'recall': 0.8549858127280098, 'f1-score': 0.8912480853536153, 'support': 9868.0} | 0.9255 | {'precision': 0.8978197891465168, 'recall': 0.9037258658184922, 'f1-score': 0.8998080219364746, 'support': 29334.0} | {'precision': 0.9258135338341278, 'recall': 0.9255130565214427, 'f1-score': 0.9248638606488995, 'support': 29334.0} |
+| 0.1199 | 15.0 | 615 | 0.4385 | {'precision': 0.8272208638956805, 'recall': 0.8958517210944396, 'f1-score': 0.8601694915254238, 'support': 1133.0} | {'precision': 0.9282784730255548, 'recall': 0.9629629629629629, 'f1-score': 0.9453026692725763, 'support': 18333.0} | {'precision': 0.9263945428539994, 'recall': 0.8532630725577625, 'f1-score': 0.8883262119533682, 'support': 9868.0} | 0.9235 | {'precision': 0.8939646265917448, 'recall': 0.9040259188717217, 'f1-score': 0.8979327909171229, 'support': 29334.0} | {'precision': 0.923741454750616, 'recall': 0.9234676484625349, 'f1-score': 0.9228475124165911, 'support': 29334.0} |
+| 0.1199 | 16.0 | 656 | 0.4247 | {'precision': 0.8405063291139241, 'recall': 0.8790820829655781, 'f1-score': 0.8593615185504745, 'support': 1133.0} | {'precision': 0.9346315063405316, 'recall': 0.960835651557301, 'f1-score': 0.9475524475524475, 'support': 18333.0} | {'precision': 0.9215222532788647, 'recall': 0.8686663964329144, 'f1-score': 0.8943140323422013, 'support': 9868.0} | 0.9267 | {'precision': 0.8988866962444403, 'recall': 0.9028613769852646, 'f1-score': 0.9004093328150411, 'support': 29334.0} | {'precision': 0.9265860323168638, 'recall': 0.9266721210881571, 'f1-score': 0.926236670506905, 'support': 29334.0} |
 
 
 ### Framework versions
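The "Macro avg" column in the card above is the unweighted mean of the three per-class scores, which is why the small B class drags it below the weighted average. A quick sketch with the final-epoch f1-scores:

```python
# Final-epoch (16.0) per-class f1-scores from the evaluation summary above.
f1 = {"B": 0.8593615185504745, "I": 0.9475524475524475, "O": 0.8943140323422013}

# Macro average: each class counts equally, regardless of support,
# so B (support 1133) weighs the same as I (support 18333).
macro_f1 = sum(f1.values()) / len(f1)
print(round(macro_f1, 4))  # matches the reported macro-avg f1-score of 0.9004
```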
meta_data/README_s42_e16.md
ADDED
@@ -0,0 +1,95 @@
+---
+license: apache-2.0
+base_model: allenai/longformer-base-4096
+tags:
+- generated_from_trainer
+datasets:
+- essays_su_g
+metrics:
+- accuracy
+model-index:
+- name: longformer-spans
+  results:
+  - task:
+      name: Token Classification
+      type: token-classification
+    dataset:
+      name: essays_su_g
+      type: essays_su_g
+      config: spans
+      split: train[0%:20%]
+      args: spans
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 0.9266721210881571
+---
+
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+# longformer-spans
+
+This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.4247
+- B: {'precision': 0.8405063291139241, 'recall': 0.8790820829655781, 'f1-score': 0.8593615185504745, 'support': 1133.0}
+- I: {'precision': 0.9346315063405316, 'recall': 0.960835651557301, 'f1-score': 0.9475524475524475, 'support': 18333.0}
+- O: {'precision': 0.9215222532788647, 'recall': 0.8686663964329144, 'f1-score': 0.8943140323422013, 'support': 9868.0}
+- Accuracy: 0.9267
+- Macro avg: {'precision': 0.8988866962444403, 'recall': 0.9028613769852646, 'f1-score': 0.9004093328150411, 'support': 29334.0}
+- Weighted avg: {'precision': 0.9265860323168638, 'recall': 0.9266721210881571, 'f1-score': 0.926236670506905, 'support': 29334.0}
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 2e-05
+- train_batch_size: 8
+- eval_batch_size: 8
+- seed: 42
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- num_epochs: 16
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | B | I | O | Accuracy | Macro avg | Weighted avg |
+|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------:|:--------:|:-------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------:|
+| No log | 1.0 | 41 | 0.3607 | {'precision': 0.8202247191011236, 'recall': 0.19329214474845544, 'f1-score': 0.3128571428571429, 'support': 1133.0} | {'precision': 0.8412101850981866, 'recall': 0.9767086674303169, 'f1-score': 0.9039097402761301, 'support': 18333.0} | {'precision': 0.9249453797712376, 'recall': 0.7293271179570329, 'f1-score': 0.8155702872684006, 'support': 9868.0} | 0.8632 | {'precision': 0.8621267613235158, 'recall': 0.6331093100452684, 'f1-score': 0.6774457234672245, 'support': 29334.0} | {'precision': 0.8685682804162133, 'recall': 0.8632303811277017, 'f1-score': 0.8513633328596172, 'support': 29334.0} |
+| No log | 2.0 | 82 | 0.2532 | {'precision': 0.7996755879967559, 'recall': 0.8702559576345984, 'f1-score': 0.8334742180896026, 'support': 1133.0} | {'precision': 0.9331675137882557, 'recall': 0.9413625702285496, 'f1-score': 0.937247128465528, 'support': 18333.0} | {'precision': 0.8902883314250026, 'recall': 0.8667409809485205, 'f1-score': 0.878356867779204, 'support': 9868.0} | 0.9135 | {'precision': 0.874377144403338, 'recall': 0.892786502937223, 'f1-score': 0.8830260714447782, 'support': 29334.0} | {'precision': 0.9135868864110707, 'recall': 0.9135133292425173, 'f1-score': 0.9134282220801536, 'support': 29334.0} |
+| No log | 3.0 | 123 | 0.2280 | {'precision': 0.8041074249605056, 'recall': 0.8984995586937334, 'f1-score': 0.8486869528970404, 'support': 1133.0} | {'precision': 0.9231209660628774, 'recall': 0.9673812251131839, 'f1-score': 0.9447329870821681, 'support': 18333.0} | {'precision': 0.9356368563685636, 'recall': 0.8396838265099311, 'f1-score': 0.8850672933133945, 'support': 9868.0} | 0.9218 | {'precision': 0.8876217491306488, 'recall': 0.9018548701056162, 'f1-score': 0.8928290777642011, 'support': 29334.0} | {'precision': 0.9227345360999514, 'recall': 0.9217631417467785, 'f1-score': 0.9209516676970857, 'support': 29334.0} |
+| No log | 4.0 | 164 | 0.2643 | {'precision': 0.8111111111111111, 'recall': 0.9020300088261254, 'f1-score': 0.854157960718763, 'support': 1133.0} | {'precision': 0.9180539091893006, 'recall': 0.9716358479245077, 'f1-score': 0.9440852236591055, 'support': 18333.0} | {'precision': 0.9426825049013955, 'recall': 0.8283340089177138, 'f1-score': 0.8818167107179459, 'support': 9868.0} | 0.9207 | {'precision': 0.8906158417339357, 'recall': 0.9006666218894489, 'f1-score': 0.8933532983652714, 'support': 29334.0} | {'precision': 0.9222084326864153, 'recall': 0.9207404377173246, 'f1-score': 0.9196646443104053, 'support': 29334.0} |
+| No log | 5.0 | 205 | 0.2475 | {'precision': 0.8158526821457166, 'recall': 0.8993821712268314, 'f1-score': 0.8555835432409741, 'support': 1133.0} | {'precision': 0.9278074866310161, 'recall': 0.965308460153821, 'f1-score': 0.9461865426257119, 'support': 18333.0} | {'precision': 0.9316391077571856, 'recall': 0.8507296311309283, 'f1-score': 0.8893479527517347, 'support': 9868.0} | 0.9242 | {'precision': 0.8917664255113061, 'recall': 0.9051400875038601, 'f1-score': 0.8970393462061402, 'support': 29334.0} | {'precision': 0.924772293469197, 'recall': 0.9242176314174678, 'f1-score': 0.9235664975183513, 'support': 29334.0} |
+| No log | 6.0 | 246 | 0.2554 | {'precision': 0.8058176100628931, 'recall': 0.9046778464254193, 'f1-score': 0.8523908523908524, 'support': 1133.0} | {'precision': 0.9404408990459764, 'recall': 0.951726395025364, 'f1-score': 0.946049991866833, 'support': 18333.0} | {'precision': 0.9114523083394679, 'recall': 0.8782934738548844, 'f1-score': 0.894565722248026, 'support': 9868.0} | 0.9252 | {'precision': 0.8859036058161124, 'recall': 0.9115659051018893, 'f1-score': 0.8976688555019038, 'support': 29334.0} | {'precision': 0.9254893888697421, 'recall': 0.9252062453126065, 'f1-score': 0.9251131071042821, 'support': 29334.0} |
+| No log | 7.0 | 287 | 0.2822 | {'precision': 0.8323746918652424, 'recall': 0.8940864960282436, 'f1-score': 0.8621276595744681, 'support': 1133.0} | {'precision': 0.9353455123113582, 'recall': 0.9635084274259532, 'f1-score': 0.9492181202643882, 'support': 18333.0} | {'precision': 0.9287261698440208, 'recall': 0.8688690717470612, 'f1-score': 0.8978010471204189, 'support': 9868.0} | 0.9290 | {'precision': 0.8988154580068738, 'recall': 0.9088213317337527, 'f1-score': 0.9030489423197584, 'support': 29334.0} | {'precision': 0.9291415983878176, 'recall': 0.9289902502215859, 'f1-score': 0.9285575499450874, 'support': 29334.0} |
+| No log | 8.0 | 328 | 0.3068 | {'precision': 0.8181089743589743, 'recall': 0.9011473962930273, 'f1-score': 0.8576228475430492, 'support': 1133.0} | {'precision': 0.933372111469515, 'recall': 0.9627993236240658, 'f1-score': 0.9478573729996776, 'support': 18333.0} | {'precision': 0.9279564032697548, 'recall': 0.8627888123226591, 'f1-score': 0.8941868403087749, 'support': 9868.0} | 0.9268 | {'precision': 0.8931458296994147, 'recall': 0.9089118440799174, 'f1-score': 0.899889020283834, 'support': 29334.0} | {'precision': 0.9270983219126366, 'recall': 0.9267743914911025, 'f1-score': 0.926317298889901, 'support': 29334.0} |
+| No log | 9.0 | 369 | 0.3574 | {'precision': 0.8315441783649876, 'recall': 0.8887908208296558, 'f1-score': 0.8592150170648465, 'support': 1133.0} | {'precision': 0.9180683108038387, 'recall': 0.9705994654448262, 'f1-score': 0.9436033408458174, 'support': 18333.0} | {'precision': 0.9387941883079739, 'recall': 0.8315768139440616, 'f1-score': 0.8819388467945618, 'support': 9868.0} | 0.9207 | {'precision': 0.8961355591589334, 'recall': 0.8969890334061811, 'f1-score': 0.8949190682350753, 'support': 29334.0} | {'precision': 0.9216986072911089, 'recall': 0.9206722574486943, 'f1-score': 0.9195998909875767, 'support': 29334.0} |
+| No log | 10.0 | 410 | 0.3228 | {'precision': 0.8491048593350383, 'recall': 0.8790820829655781, 'f1-score': 0.8638334778837814, 'support': 1133.0} | {'precision': 0.9479900314226893, 'recall': 0.9544537173403153, 'f1-score': 0.9512108939686336, 'support': 18333.0} | {'precision': 0.9123982273523652, 'recall': 0.897142278070531, 'f1-score': 0.9047059424658934, 'support': 9868.0} | 0.9323 | {'precision': 0.9031643727033641, 'recall': 0.9102260261254749, 'f1-score': 0.9065834381061029, 'support': 29334.0} | {'precision': 0.9321975441198576, 'recall': 0.9322629031158383, 'f1-score': 0.9321916850692957, 'support': 29334.0} |
+| No log | 11.0 | 451 | 0.3397 | {'precision': 0.8524871355060034, 'recall': 0.8773168578993822, 'f1-score': 0.8647237929534581, 'support': 1133.0} | {'precision': 0.941372096765542, 'recall': 0.9572901325478645, 'f1-score': 0.9492643877109477, 'support': 18333.0} | {'precision': 0.9164304461942258, 'recall': 0.8845764085934333, 'f1-score': 0.9002217294900223, 'support': 9868.0} | 0.9297 | {'precision': 0.9034298928219237, 'recall': 0.9063944663468934, 'f1-score': 0.9047366367181428, 'support': 29334.0} | {'precision': 0.9295485858585807, 'recall': 0.9297402331765187, 'f1-score': 0.9295010603371041, 'support': 29334.0} |
+| No log | 12.0 | 492 | 0.3769 | {'precision': 0.8406040268456376, 'recall': 0.884377758164166, 'f1-score': 0.8619354838709679, 'support': 1133.0} | {'precision': 0.9364758459246648, 'recall': 0.9601265477554137, 'f1-score': 0.9481537342777884, 'support': 18333.0} | {'precision': 0.9215707254440403, 'recall': 0.8728212403729225, 'f1-score': 0.8965337774539398, 'support': 9868.0} | 0.9278 | {'precision': 0.8995501994047809, 'recall': 0.9057751820975007, 'f1-score': 0.9022076652008987, 'support': 29334.0} | {'precision': 0.9277587769971628, 'recall': 0.9278311856548714, 'f1-score': 0.927458601951864, 'support': 29334.0} |
+| 0.1199 | 13.0 | 533 | 0.4395 | {'precision': 0.8141945773524721, 'recall': 0.9011473962930273, 'f1-score': 0.8554671135316297, 'support': 1133.0} | {'precision': 0.9200891931134619, 'recall': 0.967817596683576, 'f1-score': 0.9433500810803624, 'support': 18333.0} | {'precision': 0.9357662573897226, 'recall': 0.8341102553708958, 'f1-score': 0.8820188598371196, 'support': 9868.0} | 0.9203 | {'precision': 0.8900166759518856, 'recall': 0.9010250827824997, 'f1-score': 0.8936120181497039, 'support': 29334.0} | {'precision': 0.9212728936187097, 'recall': 0.9202631758369128, 'f1-score': 0.9193237671285989, 'support': 29334.0} |
+| 0.1199 | 14.0 | 574 | 0.4362 | {'precision': 0.8338842975206612, 'recall': 0.8905560458958517, 'f1-score': 0.861288945795988, 'support': 1133.0} | {'precision': 0.9288525106249016, 'recall': 0.9656357388316151, 'f1-score': 0.9468870346598203, 'support': 18333.0} | {'precision': 0.9307225592939878, 'recall': 0.8549858127280098, 'f1-score': 0.8912480853536153, 'support': 9868.0} | 0.9255 | {'precision': 0.8978197891465168, 'recall': 0.9037258658184922, 'f1-score': 0.8998080219364746, 'support': 29334.0} | {'precision': 0.9258135338341278, 'recall': 0.9255130565214427, 'f1-score': 0.9248638606488995, 'support': 29334.0} |
+| 0.1199 | 15.0 | 615 | 0.4385 | {'precision': 0.8272208638956805, 'recall': 0.8958517210944396, 'f1-score': 0.8601694915254238, 'support': 1133.0} | {'precision': 0.9282784730255548, 'recall': 0.9629629629629629, 'f1-score': 0.9453026692725763, 'support': 18333.0} | {'precision': 0.9263945428539994, 'recall': 0.8532630725577625, 'f1-score': 0.8883262119533682, 'support': 9868.0} | 0.9235 | {'precision': 0.8939646265917448, 'recall': 0.9040259188717217, 'f1-score': 0.8979327909171229, 'support': 29334.0} | {'precision': 0.923741454750616, 'recall': 0.9234676484625349, 'f1-score': 0.9228475124165911, 'support': 29334.0} |
+| 0.1199 | 16.0 | 656 | 0.4247 | {'precision': 0.8405063291139241, 'recall': 0.8790820829655781, 'f1-score': 0.8593615185504745, 'support': 1133.0} | {'precision': 0.9346315063405316, 'recall': 0.960835651557301, 'f1-score': 0.9475524475524475, 'support': 18333.0} | {'precision': 0.9215222532788647, 'recall': 0.8686663964329144, 'f1-score': 0.8943140323422013, 'support': 9868.0} | 0.9267 | {'precision': 0.8988866962444403, 'recall': 0.9028613769852646, 'f1-score': 0.9004093328150411, 'support': 29334.0} | {'precision': 0.9265860323168638, 'recall': 0.9266721210881571, 'f1-score': 0.926236670506905, 'support': 29334.0} |
+
+
+### Framework versions
+
+- Transformers 4.37.2
+- Pytorch 2.2.0+cu121
+- Datasets 2.17.0
+- Tokenizers 0.15.2
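The hyperparameters listed in the card map roughly onto a `transformers.TrainingArguments` object. A sketch, not the exact trainer invocation used for this commit; the `output_dir` name is a placeholder, and the card's `train_batch_size`/`eval_batch_size` are assumed to be per-device values:

```python
from transformers import TrainingArguments

# Sketch of the settings under "Training hyperparameters" above.
# "longformer-spans" as output_dir is a placeholder, not the original path.
args = TrainingArguments(
    output_dir="longformer-spans",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",   # linear decay, the Trainer default
    num_train_epochs=16,
)
```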
meta_data/meta_s42_e16_cvi0.json
CHANGED
@@ -1 +1 @@
-{"B": {"precision": 0.
+{"B": {"precision": 0.8405063291139241, "recall": 0.8790820829655781, "f1-score": 0.8593615185504745, "support": 1133.0}, "I": {"precision": 0.9346315063405316, "recall": 0.960835651557301, "f1-score": 0.9475524475524475, "support": 18333.0}, "O": {"precision": 0.9215222532788647, "recall": 0.8686663964329144, "f1-score": 0.8943140323422013, "support": 9868.0}, "accuracy": 0.9266721210881571, "macro avg": {"precision": 0.8988866962444403, "recall": 0.9028613769852646, "f1-score": 0.9004093328150411, "support": 29334.0}, "weighted avg": {"precision": 0.9265860323168638, "recall": 0.9266721210881571, "f1-score": 0.926236670506905, "support": 29334.0}}
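One internal consistency check on the JSON above: for token-level classification, the support-weighted average of per-class recall must equal the overall accuracy, since each class's recall counts exactly its correctly labelled tokens. A sketch, with the report trimmed to the fields used:

```python
# Values from meta_s42_e16_cvi0.json above, trimmed to the fields used here.
report = {
    "B": {"recall": 0.8790820829655781, "support": 1133.0},
    "I": {"recall": 0.960835651557301, "support": 18333.0},
    "O": {"recall": 0.8686663964329144, "support": 9868.0},
    "accuracy": 0.9266721210881571,
}

classes = ["B", "I", "O"]
total = sum(report[c]["support"] for c in classes)  # 29334 evaluation tokens
weighted_recall = sum(
    report[c]["recall"] * report[c]["support"] for c in classes
) / total

# Support-weighted recall reproduces the reported accuracy.
print(round(weighted_recall, 4))  # matches the card's rounded accuracy, 0.9267
```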
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:e199d6a94ccd689a64f5603104627d8a3bf189447ab41419f70e3de19b888db4
 size 592318676