hai2131/abte-bert

Commit 8270392 (verified) by hai2131 · 1 parent: 28ffadb
README.md ADDED
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: abte-bert
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# abte-bert

This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4438
- Accuracy: 0.9133
- F1: 0.9133

## Model description

More information needed. (From the accompanying `config.json`: this is a `BertForTokenClassification` head over `bert-base-uncased` with the BIO label set `O` / `B-Term` / `I-Term`, i.e. token-level term tagging, consistent with the aspect-based term extraction the name `abte-bert` suggests.)
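
Since the card is otherwise sparse, here is a minimal usage sketch with the standard `transformers` token-classification pipeline; the example sentence and the `aggregation_strategy` choice are illustrative assumptions, not part of the original card:

```python
from transformers import pipeline

# config.json in this repo maps ids to the BIO labels O / B-Term / I-Term,
# so aggregated predictions come back grouped under "Term".
extractor = pipeline(
    "token-classification",
    model="hai2131/abte-bert",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole spans
)

# Hypothetical input sentence (not from the original card).
print(extractor("The battery life is great but the screen is too dim."))
# Expected shape: [{"entity_group": "Term", "word": "battery life", ...}, ...]
```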

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
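
For readers who want to reproduce the setup, the list above roughly corresponds to the following `TrainingArguments`; `output_dir` and the per-epoch evaluation cadence are assumptions inferred from the results table below, not recorded in the card:

```python
from transformers import TrainingArguments

# A hedged reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="abte-bert",          # assumption; not recorded in the card
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    optim="adamw_torch",             # AdamW defaults: betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",           # inferred from the per-epoch results table
    logging_strategy="epoch",
)
```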

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5077 | 1.0 | 24 | 0.3039 | 0.9166 | 0.9166 |
| 0.2475 | 2.0 | 48 | 0.2232 | 0.9173 | 0.9173 |
| 0.2026 | 3.0 | 72 | 0.1930 | 0.9208 | 0.9208 |
| 0.1792 | 4.0 | 96 | 0.1744 | 0.9234 | 0.9234 |
| 0.1646 | 5.0 | 120 | 0.1671 | 0.9231 | 0.9231 |
| 0.155 | 6.0 | 144 | 0.1628 | 0.9229 | 0.9229 |
| 0.1483 | 7.0 | 168 | 0.1604 | 0.9266 | 0.9266 |
| 0.1426 | 8.0 | 192 | 0.1588 | 0.9266 | 0.9266 |
| 0.1383 | 9.0 | 216 | 0.1607 | 0.9264 | 0.9264 |
| 0.1347 | 10.0 | 240 | 0.1649 | 0.9241 | 0.9241 |
| 0.1317 | 11.0 | 264 | 0.1651 | 0.9245 | 0.9245 |
| 0.1277 | 12.0 | 288 | 0.1684 | 0.9242 | 0.9242 |
| 0.1256 | 13.0 | 312 | 0.1735 | 0.9250 | 0.9250 |
| 0.1239 | 14.0 | 336 | 0.1794 | 0.9280 | 0.9280 |
| 0.1223 | 15.0 | 360 | 0.1837 | 0.9235 | 0.9235 |
| 0.1208 | 16.0 | 384 | 0.1848 | 0.9228 | 0.9228 |
| 0.119 | 17.0 | 408 | 0.1842 | 0.9265 | 0.9265 |
| 0.1184 | 18.0 | 432 | 0.1895 | 0.9244 | 0.9244 |
| 0.1174 | 19.0 | 456 | 0.1996 | 0.9201 | 0.9201 |
| 0.117 | 20.0 | 480 | 0.1946 | 0.9220 | 0.9220 |
| 0.1154 | 21.0 | 504 | 0.2086 | 0.9211 | 0.9211 |
| 0.1141 | 22.0 | 528 | 0.2132 | 0.9212 | 0.9212 |
| 0.1135 | 23.0 | 552 | 0.2282 | 0.9228 | 0.9228 |
| 0.1123 | 24.0 | 576 | 0.2226 | 0.9214 | 0.9214 |
| 0.1121 | 25.0 | 600 | 0.2274 | 0.9225 | 0.9225 |
| 0.1109 | 26.0 | 624 | 0.2251 | 0.9224 | 0.9224 |
| 0.111 | 27.0 | 648 | 0.2419 | 0.9186 | 0.9186 |
| 0.1113 | 28.0 | 672 | 0.2555 | 0.9200 | 0.9200 |
| 0.1104 | 29.0 | 696 | 0.2439 | 0.9206 | 0.9206 |
| 0.1097 | 30.0 | 720 | 0.2613 | 0.9187 | 0.9187 |
| 0.1092 | 31.0 | 744 | 0.2519 | 0.9195 | 0.9195 |
| 0.1096 | 32.0 | 768 | 0.2539 | 0.9208 | 0.9208 |
| 0.1092 | 33.0 | 792 | 0.2647 | 0.9231 | 0.9231 |
| 0.1082 | 34.0 | 816 | 0.2677 | 0.9220 | 0.9220 |
| 0.1082 | 35.0 | 840 | 0.2693 | 0.9222 | 0.9222 |
| 0.1087 | 36.0 | 864 | 0.2818 | 0.9201 | 0.9201 |
| 0.1082 | 37.0 | 888 | 0.2773 | 0.9206 | 0.9206 |
| 0.1076 | 38.0 | 912 | 0.2882 | 0.9187 | 0.9187 |
| 0.1067 | 39.0 | 936 | 0.2776 | 0.9199 | 0.9199 |
| 0.1062 | 40.0 | 960 | 0.2850 | 0.9217 | 0.9217 |
| 0.1065 | 41.0 | 984 | 0.3098 | 0.9188 | 0.9188 |
| 0.1061 | 42.0 | 1008 | 0.3019 | 0.9191 | 0.9191 |
| 0.1065 | 43.0 | 1032 | 0.2936 | 0.9175 | 0.9175 |
| 0.1065 | 44.0 | 1056 | 0.3130 | 0.9197 | 0.9197 |
| 0.1056 | 45.0 | 1080 | 0.3119 | 0.9170 | 0.9170 |
| 0.1056 | 46.0 | 1104 | 0.3273 | 0.9171 | 0.9171 |
| 0.1057 | 47.0 | 1128 | 0.3195 | 0.9200 | 0.9200 |
| 0.1056 | 48.0 | 1152 | 0.3272 | 0.9171 | 0.9171 |
| 0.1046 | 49.0 | 1176 | 0.3276 | 0.9187 | 0.9187 |
| 0.1049 | 50.0 | 1200 | 0.3476 | 0.9152 | 0.9152 |
| 0.1043 | 51.0 | 1224 | 0.3510 | 0.9171 | 0.9171 |
| 0.1045 | 52.0 | 1248 | 0.3377 | 0.9177 | 0.9177 |
| 0.1046 | 53.0 | 1272 | 0.3232 | 0.9200 | 0.9200 |
| 0.1045 | 54.0 | 1296 | 0.3487 | 0.9147 | 0.9147 |
| 0.104 | 55.0 | 1320 | 0.3422 | 0.9183 | 0.9183 |
| 0.1041 | 56.0 | 1344 | 0.3609 | 0.9182 | 0.9182 |
| 0.1036 | 57.0 | 1368 | 0.3602 | 0.9172 | 0.9172 |
| 0.1041 | 58.0 | 1392 | 0.3627 | 0.9163 | 0.9163 |
| 0.1038 | 59.0 | 1416 | 0.3672 | 0.9132 | 0.9132 |
| 0.1044 | 60.0 | 1440 | 0.3597 | 0.9163 | 0.9163 |
| 0.103 | 61.0 | 1464 | 0.3795 | 0.9163 | 0.9163 |
| 0.104 | 62.0 | 1488 | 0.3635 | 0.9169 | 0.9169 |
| 0.1034 | 63.0 | 1512 | 0.3777 | 0.9146 | 0.9146 |
| 0.1033 | 64.0 | 1536 | 0.3772 | 0.9161 | 0.9161 |
| 0.1037 | 65.0 | 1560 | 0.3925 | 0.9140 | 0.9140 |
| 0.103 | 66.0 | 1584 | 0.3923 | 0.9157 | 0.9157 |
| 0.1027 | 67.0 | 1608 | 0.3711 | 0.9178 | 0.9178 |
| 0.103 | 68.0 | 1632 | 0.4019 | 0.9156 | 0.9156 |
| 0.1032 | 69.0 | 1656 | 0.3967 | 0.9134 | 0.9134 |
| 0.1026 | 70.0 | 1680 | 0.4072 | 0.9141 | 0.9141 |
| 0.1029 | 71.0 | 1704 | 0.4065 | 0.9136 | 0.9136 |
| 0.1023 | 72.0 | 1728 | 0.3933 | 0.9171 | 0.9171 |
| 0.1024 | 73.0 | 1752 | 0.4131 | 0.9109 | 0.9109 |
| 0.1029 | 74.0 | 1776 | 0.4001 | 0.9150 | 0.9150 |
| 0.1018 | 75.0 | 1800 | 0.4171 | 0.9132 | 0.9132 |
| 0.1022 | 76.0 | 1824 | 0.4151 | 0.9144 | 0.9144 |
| 0.1025 | 77.0 | 1848 | 0.4194 | 0.9149 | 0.9149 |
| 0.1022 | 78.0 | 1872 | 0.4238 | 0.9132 | 0.9132 |
| 0.1021 | 79.0 | 1896 | 0.4328 | 0.9133 | 0.9133 |
| 0.102 | 80.0 | 1920 | 0.4241 | 0.9113 | 0.9113 |
| 0.1023 | 81.0 | 1944 | 0.4214 | 0.9146 | 0.9146 |
| 0.1023 | 82.0 | 1968 | 0.4324 | 0.9136 | 0.9136 |
| 0.1021 | 83.0 | 1992 | 0.4251 | 0.9153 | 0.9153 |
| 0.1017 | 84.0 | 2016 | 0.4366 | 0.9138 | 0.9138 |
| 0.1017 | 85.0 | 2040 | 0.4405 | 0.9135 | 0.9135 |
| 0.1021 | 86.0 | 2064 | 0.4337 | 0.9156 | 0.9156 |
| 0.1019 | 87.0 | 2088 | 0.4343 | 0.9130 | 0.9130 |
| 0.1021 | 88.0 | 2112 | 0.4360 | 0.9145 | 0.9145 |
| 0.1018 | 89.0 | 2136 | 0.4425 | 0.9143 | 0.9143 |
| 0.1014 | 90.0 | 2160 | 0.4438 | 0.9131 | 0.9131 |
| 0.1017 | 91.0 | 2184 | 0.4409 | 0.9128 | 0.9128 |
| 0.1018 | 92.0 | 2208 | 0.4402 | 0.9136 | 0.9136 |
| 0.1015 | 93.0 | 2232 | 0.4432 | 0.9131 | 0.9131 |
| 0.1016 | 94.0 | 2256 | 0.4453 | 0.9126 | 0.9126 |
| 0.1017 | 95.0 | 2280 | 0.4495 | 0.9139 | 0.9139 |
| 0.1016 | 96.0 | 2304 | 0.4465 | 0.9135 | 0.9135 |
| 0.1019 | 97.0 | 2328 | 0.4433 | 0.9134 | 0.9134 |
| 0.1016 | 98.0 | 2352 | 0.4439 | 0.9128 | 0.9128 |
| 0.102 | 99.0 | 2376 | 0.4432 | 0.9134 | 0.9134 |
| 0.1014 | 100.0 | 2400 | 0.4438 | 0.9133 | 0.9133 |

Note that validation loss bottoms out at 0.1588 by epoch 8 and climbs steadily to 0.4438 by epoch 100, while accuracy and F1 peak in the epoch 7-14 range (up to 0.9280); the epoch-100 checkpoint reported above is therefore likely overfit relative to the best intermediate checkpoint.

### Framework versions

- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
config.json ADDED
{
  "architectures": [
    "BertForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "O",
    "1": "B-Term",
    "2": "I-Term"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "B-Term": 1,
    "I-Term": 2,
    "O": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.51.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
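
As a sketch of how the `id2label` mapping above is consumed at inference time (assuming the hosted repo id `hai2131/abte-bert`; the input sentence is hypothetical):

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

repo = "hai2131/abte-bert"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

inputs = tokenizer("The battery life is great.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, 3)

# id2label from the config above turns argmax ids into O / B-Term / I-Term tags.
tags = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, tags)))
```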
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b0fc6aac895c736f69195ed8d9575c6cf3984291efab9207cc36f28f2bf293fb
size 435599164
runs/May01_03-46-39_0fb7ae286e2b/events.out.tfevents.1746071247.0fb7ae286e2b.504.0 ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ae2809287af70ec8f46b72335406d0c928a7728cff0a9b7a172b63133ea5fd50
size 63373
special_tokens_map.json ADDED
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render.
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
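
One practical detail this tokenizer implies: WordPiece splits words into subtokens, so word-level BIO tags need re-alignment before training a token classifier like this one. A minimal sketch (the fast tokenizer backed by the repo's tokenizer.json exposes `word_ids()`; the example words and tags are hypothetical):

```python
from transformers import AutoTokenizer

# AutoTokenizer picks the fast variant, which supports word_ids().
tokenizer = AutoTokenizer.from_pretrained("hai2131/abte-bert")

words = ["The", "battery", "life", "is", "great"]
word_tags = [0, 1, 2, 0, 0]  # O, B-Term, I-Term, O, O -- hypothetical labels

enc = tokenizer(words, is_split_into_words=True)
aligned = [
    -100 if wid is None else word_tags[wid]  # -100 is ignored by the CE loss
    for wid in enc.word_ids()
]
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
print(aligned)
```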
training_args.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:057e06400d2299e5e8d5ed9408f3c9c82c7f6c6f857b57cf0be8e3cacfe347b4
size 5304
vocab.txt ADDED
The diff for this file is too large to render.