mircoboettcher committed on
Commit d7c5acf · verified · 1 Parent(s): 6aa3317

End of training

README.md ADDED
@@ -0,0 +1,95 @@
---
library_name: transformers
license: mit
base_model: dslim/bert-base-NER
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-wnut17-final
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: wnut_17
      type: wnut_17
      config: wnut_17
      split: test
      args: wnut_17
    metrics:
    - name: Precision
      type: precision
      value: 0.5603799185888738
    - name: Recall
      type: recall
      value: 0.3827618164967563
    - name: F1
      type: f1
      value: 0.45484581497797355
    - name: Accuracy
      type: accuracy
      value: 0.9482345900658289
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-wnut17-final

This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3245
- Precision: 0.5604
- Recall: 0.3828
- F1: 0.4548
- Accuracy: 0.9482
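
As a quick illustration, the checkpoint can be loaded with the standard `transformers` token-classification pipeline. This is only a minimal usage sketch; the repository id `mircoboettcher/bert-wnut17-final` is assumed from the commit author and model name and may differ.

```python
from transformers import pipeline

# Hypothetical repo id inferred from the commit author and the model name above.
ner = pipeline(
    "token-classification",
    model="mircoboettcher/bert-wnut17-final",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("so excited to visit the Empire State Building next week with Alice"))
```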

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 3.4590617775212224e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
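
The sketch below shows one way these hyperparameters map onto `TrainingArguments`, assuming the usual `Trainer` token-classification setup; the `output_dir` and the per-epoch evaluation strategy are assumptions, and dataset loading, tokenization, and the data collator are omitted.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir and eval_strategy are assumed.
training_args = TrainingArguments(
    output_dir="bert-wnut17-final",
    learning_rate=3.4590617775212224e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    eval_strategy="epoch",  # the results table reports one evaluation per epoch
)
```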

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 213  | 0.2392          | 0.5203    | 0.4041 | 0.4549 | 0.9462   |
| No log        | 2.0   | 426  | 0.2932          | 0.5818    | 0.3494 | 0.4366 | 0.9459   |
| 0.1758        | 3.0   | 639  | 0.3100          | 0.5768    | 0.3828 | 0.4602 | 0.9478   |
| 0.1758        | 4.0   | 852  | 0.3245          | 0.5604    | 0.3828 | 0.4548 | 0.9482   |
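
The precision, recall, and F1 values above are the usual entity-level NER metrics. A minimal sketch of computing them with the `evaluate` library's seqeval wrapper follows; the tag sequences shown are illustrative only, not taken from this run.

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Illustrative IOB2-tagged predictions and references, one list of tags per sentence.
predictions = [["O", "B-person", "I-person", "O"], ["B-location", "O"]]
references = [["O", "B-person", "I-person", "O"], ["B-corporation", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(
    results["overall_precision"],
    results["overall_recall"],
    results["overall_f1"],
    results["overall_accuracy"],
)
```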

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:29414aae38078d9dee2b36d738330d0c2b31369ce9d870c9a4ff9fe0994fa073
+ oid sha256:81c7ca4caf7e73ebbb1fb74de0f27c42f0426be4dfff2b346d9f640ecc3baa31
  size 430942044
runs/Jan15_14-36-33_49cc5c9ac2cb/events.out.tfevents.1736951796.49cc5c9ac2cb.768.24 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:572a753d71d226550876435c7aaf4c54149b1c526f83cce31d8104c6435f8859
- size 7404
+ oid sha256:d74f5911bdbbfbb1c55c6a0a8d6ee4362e1f12ab871e281e7e0dd84608a14b91
+ size 8230