Alijeff1214 committed on
Commit dfda3f1 · verified · 1 Parent(s): 080901d

Upload folder using huggingface_hub

Files changed (1)
  1. FineTune_withPlots32k1115474.out +1071 -0
FineTune_withPlots32k1115474.out ADDED
@@ -0,0 +1,1071 @@
1
+ Loading pytorch-gpu/py3/2.1.1
2
+ Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
3
+ gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
4
+ sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
5
+ + HF_DATASETS_OFFLINE=1
6
+ + TRANSFORMERS_OFFLINE=1
7
+ + python3 FIneTune_withPlots.py
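The two exported variables above switch the Hugging Face libraries to offline mode so the job uses only locally cached datasets and models. A minimal Python sketch of the same setup, assuming it were done inside the script rather than in the job file, is:

    import os

    # Must be set before datasets/transformers are imported for offline mode to apply.
    os.environ["HF_DATASETS_OFFLINE"] = "1"
    os.environ["TRANSFORMERS_OFFLINE"] = "1"

    import datasets
    import transformers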
8
+
9
+ Checking label assignment:
10
+
11
+ Domain: Mathematics
12
+ Categories: math.OA math.PR
13
+ Abstract: we study the distributional behavior for products and for sums of boolean independent random variabl...
14
+
15
+ Domain: Computer Science
16
+ Categories: cs.CL physics.soc-ph
17
+ Abstract: zipfs law states that if words of language are ranked in the order of decreasing frequency in texts ...
18
+
19
+ Domain: Physics
20
+ Categories: physics.atom-ph
21
+ Abstract: the effects of parity and time reversal violating potential in particular the tensorpseudotensor ele...
22
+
23
+ Domain: Chemistry
24
+ Categories: nlin.AO
25
+ Abstract: over a period of approximately five years pankaj ghemawat of harvard business school and daniel levi...
26
+
27
+ Domain: Statistics
28
+ Categories: stat.AP
29
+ Abstract: we consider data consisting of photon counts of diffracted xray radiation as a function of the angle...
30
+
31
+ Domain: Biology
32
+ Categories: q-bio.PE q-bio.GN
33
+ Abstract: this paper develops simplified mathematical models describing the mutationselection balance for the ...
34
+ /linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
35
+ warnings.warn(
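This FutureWarning is raised because BertTokenizer.from_pretrained() is being given the path of a single vocabulary file. A sketch of the directory-based call the warning asks for (the directory path below is a placeholder, not a path taken from this log):

    from transformers import BertTokenizer

    # Preferred: pass a directory containing vocab.txt (plus tokenizer_config.json),
    # or a model identifier, instead of the path to a single vocab file.
    tokenizer = BertTokenizer.from_pretrained("/path/to/tokenizer_dir")
    print("Vocabulary size:", tokenizer.vocab_size)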
36
+
37
+ Training with All Cluster tokenizer:
38
+ Vocabulary size: 29376
39
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
40
+ Initialized model with vocabulary size: 29376
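The two messages above indicate the script tries to load local BERT weights, catches the failure, and falls back to a randomly initialised model sized to the tokenizer. The source is not part of this log; a hedged sketch of that pattern, with the config values treated as assumptions, is:

    from transformers import BertConfig, BertForSequenceClassification

    model_path = "/gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model"
    vocab_size = 29376   # tokenizer vocabulary size reported above
    config = BertConfig(vocab_size=vocab_size, num_labels=6)  # 6 domain classes in this run

    try:
        # "HeaderTooLarge" usually means the file on disk is not a valid safetensors checkpoint.
        model = BertForSequenceClassification.from_pretrained(model_path, config=config)
    except Exception as err:
        print(f"Could not load pretrained weights from {model_path}. "
              f"Starting with random weights. Error: {err}")
        model = BertForSequenceClassification(config)

    print(f"Initialized model with vocabulary size: {config.vocab_size}")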
41
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
42
+ scaler = amp.GradScaler()
43
+ Batch 0:
44
+ input_ids shape: torch.Size([16, 256])
45
+ attention_mask shape: torch.Size([16, 256])
46
+ labels shape: torch.Size([16])
47
+ input_ids max value: 29374
48
+ Vocab size: 29376
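The Batch 0/100/200/... blocks that repeat through the rest of the log are shape and vocabulary sanity checks printed every 100 training batches. A sketch of the kind of logging that would produce them (train_loader and vocab_size are assumed to be defined earlier in the script):

    for batch_idx, batch in enumerate(train_loader):
        if batch_idx % 100 == 0:
            print(f"Batch {batch_idx}:")
            print(f"input_ids shape: {batch['input_ids'].shape}")
            print(f"attention_mask shape: {batch['attention_mask'].shape}")
            print(f"labels shape: {batch['labels'].shape}")
            print(f"input_ids max value: {batch['input_ids'].max().item()}")
            print(f"Vocab size: {vocab_size}")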
49
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
50
+ with amp.autocast():
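Both FutureWarnings come from the older torch.cuda.amp entry points used at lines 173 and 202 of FIneTune_withPlots.py. A sketch of the mixed-precision step written against the replacement torch.amp API the warnings point to (model, optimizer and train_loader are assumed from the script):

    import torch

    scaler = torch.amp.GradScaler("cuda")        # replaces torch.cuda.amp.GradScaler()

    for batch in train_loader:
        optimizer.zero_grad()
        with torch.amp.autocast("cuda"):         # replaces torch.cuda.amp.autocast()
            outputs = model(**batch)
            loss = outputs.loss
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()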
51
+ Batch 100:
52
+ input_ids shape: torch.Size([16, 256])
53
+ attention_mask shape: torch.Size([16, 256])
54
+ labels shape: torch.Size([16])
55
+ input_ids max value: 29374
56
+ Vocab size: 29376
57
+ Batch 200:
58
+ input_ids shape: torch.Size([16, 256])
59
+ attention_mask shape: torch.Size([16, 256])
60
+ labels shape: torch.Size([16])
61
+ input_ids max value: 29374
62
+ Vocab size: 29376
63
+ Batch 300:
64
+ input_ids shape: torch.Size([16, 256])
65
+ attention_mask shape: torch.Size([16, 256])
66
+ labels shape: torch.Size([16])
67
+ input_ids max value: 29374
68
+ Vocab size: 29376
69
+ Batch 400:
70
+ input_ids shape: torch.Size([16, 256])
71
+ attention_mask shape: torch.Size([16, 256])
72
+ labels shape: torch.Size([16])
73
+ input_ids max value: 29374
74
+ Vocab size: 29376
75
+ Batch 500:
76
+ input_ids shape: torch.Size([16, 256])
77
+ attention_mask shape: torch.Size([16, 256])
78
+ labels shape: torch.Size([16])
79
+ input_ids max value: 29374
80
+ Vocab size: 29376
81
+ Batch 600:
82
+ input_ids shape: torch.Size([16, 256])
83
+ attention_mask shape: torch.Size([16, 256])
84
+ labels shape: torch.Size([16])
85
+ input_ids max value: 29374
86
+ Vocab size: 29376
87
+ Batch 700:
88
+ input_ids shape: torch.Size([16, 256])
89
+ attention_mask shape: torch.Size([16, 256])
90
+ labels shape: torch.Size([16])
91
+ input_ids max value: 29374
92
+ Vocab size: 29376
93
+ Batch 800:
94
+ input_ids shape: torch.Size([16, 256])
95
+ attention_mask shape: torch.Size([16, 256])
96
+ labels shape: torch.Size([16])
97
+ input_ids max value: 29374
98
+ Vocab size: 29376
99
+ Batch 900:
100
+ input_ids shape: torch.Size([16, 256])
101
+ attention_mask shape: torch.Size([16, 256])
102
+ labels shape: torch.Size([16])
103
+ input_ids max value: 29374
104
+ Vocab size: 29376
105
+ Epoch 1/5:
106
+ Train Loss: 0.8540, Train Accuracy: 0.7226
107
+ Val Loss: 0.6542, Val Accuracy: 0.7833, Val F1: 0.7250
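Each epoch summary above reports mean training loss/accuracy and validation loss/accuracy/F1. A hedged sketch of the validation pass behind such numbers (the weighted F1 averaging and all names are assumptions; the actual script is not shown in this log):

    import torch
    from sklearn.metrics import f1_score

    def evaluate(model, loader, device):
        # One pass over the loader: mean loss, accuracy and weighted F1.
        model.eval()
        total_loss, preds, labels = 0.0, [], []
        with torch.no_grad():
            for batch in loader:
                batch = {k: v.to(device) for k, v in batch.items()}
                out = model(**batch)            # labels in the batch -> out.loss is set
                total_loss += out.loss.item()
                preds.extend(out.logits.argmax(dim=-1).cpu().tolist())
                labels.extend(batch["labels"].cpu().tolist())
        acc = sum(p == t for p, t in zip(preds, labels)) / len(labels)
        return total_loss / len(loader), acc, f1_score(labels, preds, average="weighted")

    # val_loss, val_acc, val_f1 = evaluate(model, val_loader, device)
    # print(f"Val Loss: {val_loss:.4f}, Val Accuracy: {val_acc:.4f}, Val F1: {val_f1:.4f}")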
108
+ Batch 0:
109
+ input_ids shape: torch.Size([16, 256])
110
+ attention_mask shape: torch.Size([16, 256])
111
+ labels shape: torch.Size([16])
112
+ input_ids max value: 29374
113
+ Vocab size: 29376
114
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
115
+ with amp.autocast():
116
+ Batch 100:
117
+ input_ids shape: torch.Size([16, 256])
118
+ attention_mask shape: torch.Size([16, 256])
119
+ labels shape: torch.Size([16])
120
+ input_ids max value: 29374
121
+ Vocab size: 29376
122
+ Batch 200:
123
+ input_ids shape: torch.Size([16, 256])
124
+ attention_mask shape: torch.Size([16, 256])
125
+ labels shape: torch.Size([16])
126
+ input_ids max value: 29374
127
+ Vocab size: 29376
128
+ Batch 300:
129
+ input_ids shape: torch.Size([16, 256])
130
+ attention_mask shape: torch.Size([16, 256])
131
+ labels shape: torch.Size([16])
132
+ input_ids max value: 29374
133
+ Vocab size: 29376
134
+ Batch 400:
135
+ input_ids shape: torch.Size([16, 256])
136
+ attention_mask shape: torch.Size([16, 256])
137
+ labels shape: torch.Size([16])
138
+ input_ids max value: 29374
139
+ Vocab size: 29376
140
+ Batch 500:
141
+ input_ids shape: torch.Size([16, 256])
142
+ attention_mask shape: torch.Size([16, 256])
143
+ labels shape: torch.Size([16])
144
+ input_ids max value: 29374
145
+ Vocab size: 29376
146
+ Batch 600:
147
+ input_ids shape: torch.Size([16, 256])
148
+ attention_mask shape: torch.Size([16, 256])
149
+ labels shape: torch.Size([16])
150
+ input_ids max value: 29374
151
+ Vocab size: 29376
152
+ Batch 700:
153
+ input_ids shape: torch.Size([16, 256])
154
+ attention_mask shape: torch.Size([16, 256])
155
+ labels shape: torch.Size([16])
156
+ input_ids max value: 29374
157
+ Vocab size: 29376
158
+ Batch 800:
159
+ input_ids shape: torch.Size([16, 256])
160
+ attention_mask shape: torch.Size([16, 256])
161
+ labels shape: torch.Size([16])
162
+ input_ids max value: 29374
163
+ Vocab size: 29376
164
+ Batch 900:
165
+ input_ids shape: torch.Size([16, 256])
166
+ attention_mask shape: torch.Size([16, 256])
167
+ labels shape: torch.Size([16])
168
+ input_ids max value: 29374
169
+ Vocab size: 29376
170
+ Epoch 2/5:
171
+ Train Loss: 0.6120, Train Accuracy: 0.8040
172
+ Val Loss: 0.6541, Val Accuracy: 0.7765, Val F1: 0.7610
173
+ Batch 0:
174
+ input_ids shape: torch.Size([16, 256])
175
+ attention_mask shape: torch.Size([16, 256])
176
+ labels shape: torch.Size([16])
177
+ input_ids max value: 29374
178
+ Vocab size: 29376
179
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
180
+ with amp.autocast():
181
+ Batch 100:
182
+ input_ids shape: torch.Size([16, 256])
183
+ attention_mask shape: torch.Size([16, 256])
184
+ labels shape: torch.Size([16])
185
+ input_ids max value: 29374
186
+ Vocab size: 29376
187
+ Batch 200:
188
+ input_ids shape: torch.Size([16, 256])
189
+ attention_mask shape: torch.Size([16, 256])
190
+ labels shape: torch.Size([16])
191
+ input_ids max value: 29374
192
+ Vocab size: 29376
193
+ Batch 300:
194
+ input_ids shape: torch.Size([16, 256])
195
+ attention_mask shape: torch.Size([16, 256])
196
+ labels shape: torch.Size([16])
197
+ input_ids max value: 29374
198
+ Vocab size: 29376
199
+ Batch 400:
200
+ input_ids shape: torch.Size([16, 256])
201
+ attention_mask shape: torch.Size([16, 256])
202
+ labels shape: torch.Size([16])
203
+ input_ids max value: 29374
204
+ Vocab size: 29376
205
+ Batch 500:
206
+ input_ids shape: torch.Size([16, 256])
207
+ attention_mask shape: torch.Size([16, 256])
208
+ labels shape: torch.Size([16])
209
+ input_ids max value: 29374
210
+ Vocab size: 29376
211
+ Batch 600:
212
+ input_ids shape: torch.Size([16, 256])
213
+ attention_mask shape: torch.Size([16, 256])
214
+ labels shape: torch.Size([16])
215
+ input_ids max value: 29374
216
+ Vocab size: 29376
217
+ Batch 700:
218
+ input_ids shape: torch.Size([16, 256])
219
+ attention_mask shape: torch.Size([16, 256])
220
+ labels shape: torch.Size([16])
221
+ input_ids max value: 29374
222
+ Vocab size: 29376
223
+ Batch 800:
224
+ input_ids shape: torch.Size([16, 256])
225
+ attention_mask shape: torch.Size([16, 256])
226
+ labels shape: torch.Size([16])
227
+ input_ids max value: 29374
228
+ Vocab size: 29376
229
+ Batch 900:
230
+ input_ids shape: torch.Size([16, 256])
231
+ attention_mask shape: torch.Size([16, 256])
232
+ labels shape: torch.Size([16])
233
+ input_ids max value: 29374
234
+ Vocab size: 29376
235
+ Epoch 3/5:
236
+ Train Loss: 0.5221, Train Accuracy: 0.8347
237
+ Val Loss: 0.6959, Val Accuracy: 0.7582, Val F1: 0.7540
238
+ Batch 0:
239
+ input_ids shape: torch.Size([16, 256])
240
+ attention_mask shape: torch.Size([16, 256])
241
+ labels shape: torch.Size([16])
242
+ input_ids max value: 29374
243
+ Vocab size: 29376
244
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
245
+ with amp.autocast():
246
+ Batch 100:
247
+ input_ids shape: torch.Size([16, 256])
248
+ attention_mask shape: torch.Size([16, 256])
249
+ labels shape: torch.Size([16])
250
+ input_ids max value: 29374
251
+ Vocab size: 29376
252
+ Batch 200:
253
+ input_ids shape: torch.Size([16, 256])
254
+ attention_mask shape: torch.Size([16, 256])
255
+ labels shape: torch.Size([16])
256
+ input_ids max value: 29374
257
+ Vocab size: 29376
258
+ Batch 300:
259
+ input_ids shape: torch.Size([16, 256])
260
+ attention_mask shape: torch.Size([16, 256])
261
+ labels shape: torch.Size([16])
262
+ input_ids max value: 29374
263
+ Vocab size: 29376
264
+ Batch 400:
265
+ input_ids shape: torch.Size([16, 256])
266
+ attention_mask shape: torch.Size([16, 256])
267
+ labels shape: torch.Size([16])
268
+ input_ids max value: 29374
269
+ Vocab size: 29376
270
+ Batch 500:
271
+ input_ids shape: torch.Size([16, 256])
272
+ attention_mask shape: torch.Size([16, 256])
273
+ labels shape: torch.Size([16])
274
+ input_ids max value: 29374
275
+ Vocab size: 29376
276
+ Batch 600:
277
+ input_ids shape: torch.Size([16, 256])
278
+ attention_mask shape: torch.Size([16, 256])
279
+ labels shape: torch.Size([16])
280
+ input_ids max value: 29374
281
+ Vocab size: 29376
282
+ Batch 700:
283
+ input_ids shape: torch.Size([16, 256])
284
+ attention_mask shape: torch.Size([16, 256])
285
+ labels shape: torch.Size([16])
286
+ input_ids max value: 29374
287
+ Vocab size: 29376
288
+ Batch 800:
289
+ input_ids shape: torch.Size([16, 256])
290
+ attention_mask shape: torch.Size([16, 256])
291
+ labels shape: torch.Size([16])
292
+ input_ids max value: 29374
293
+ Vocab size: 29376
294
+ Batch 900:
295
+ input_ids shape: torch.Size([16, 256])
296
+ attention_mask shape: torch.Size([16, 256])
297
+ labels shape: torch.Size([16])
298
+ input_ids max value: 29374
299
+ Vocab size: 29376
300
+ Epoch 4/5:
301
+ Train Loss: 0.4214, Train Accuracy: 0.8676
302
+ Val Loss: 0.5618, Val Accuracy: 0.8204, Val F1: 0.7935
303
+ Batch 0:
304
+ input_ids shape: torch.Size([16, 256])
305
+ attention_mask shape: torch.Size([16, 256])
306
+ labels shape: torch.Size([16])
307
+ input_ids max value: 29374
308
+ Vocab size: 29376
309
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
310
+ with amp.autocast():
311
+ Batch 100:
312
+ input_ids shape: torch.Size([16, 256])
313
+ attention_mask shape: torch.Size([16, 256])
314
+ labels shape: torch.Size([16])
315
+ input_ids max value: 29374
316
+ Vocab size: 29376
317
+ Batch 200:
318
+ input_ids shape: torch.Size([16, 256])
319
+ attention_mask shape: torch.Size([16, 256])
320
+ labels shape: torch.Size([16])
321
+ input_ids max value: 29374
322
+ Vocab size: 29376
323
+ Batch 300:
324
+ input_ids shape: torch.Size([16, 256])
325
+ attention_mask shape: torch.Size([16, 256])
326
+ labels shape: torch.Size([16])
327
+ input_ids max value: 29374
328
+ Vocab size: 29376
329
+ Batch 400:
330
+ input_ids shape: torch.Size([16, 256])
331
+ attention_mask shape: torch.Size([16, 256])
332
+ labels shape: torch.Size([16])
333
+ input_ids max value: 29374
334
+ Vocab size: 29376
335
+ Batch 500:
336
+ input_ids shape: torch.Size([16, 256])
337
+ attention_mask shape: torch.Size([16, 256])
338
+ labels shape: torch.Size([16])
339
+ input_ids max value: 29374
340
+ Vocab size: 29376
341
+ Batch 600:
342
+ input_ids shape: torch.Size([16, 256])
343
+ attention_mask shape: torch.Size([16, 256])
344
+ labels shape: torch.Size([16])
345
+ input_ids max value: 29374
346
+ Vocab size: 29376
347
+ Batch 700:
348
+ input_ids shape: torch.Size([16, 256])
349
+ attention_mask shape: torch.Size([16, 256])
350
+ labels shape: torch.Size([16])
351
+ input_ids max value: 29374
352
+ Vocab size: 29376
353
+ Batch 800:
354
+ input_ids shape: torch.Size([16, 256])
355
+ attention_mask shape: torch.Size([16, 256])
356
+ labels shape: torch.Size([16])
357
+ input_ids max value: 29374
358
+ Vocab size: 29376
359
+ Batch 900:
360
+ input_ids shape: torch.Size([16, 256])
361
+ attention_mask shape: torch.Size([16, 256])
362
+ labels shape: torch.Size([16])
363
+ input_ids max value: 29374
364
+ Vocab size: 29376
365
+ Epoch 5/5:
366
+ Train Loss: 0.3263, Train Accuracy: 0.8953
367
+ Val Loss: 0.5990, Val Accuracy: 0.8125, Val F1: 0.8073
368
+
369
+ Test Results for All Cluster tokenizer:
370
+ Accuracy: 0.8125
371
+ F1 Score: 0.8071
372
+ AUC-ROC: 0.8733
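The three test metrics reported above can be computed with scikit-learn; the averaging and multi-class settings below are assumptions, since the script's exact choices are not visible in this log:

    from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

    # y_true: true class ids, y_pred: predicted class ids,
    # y_prob: (n_samples, 6) array of predicted class probabilities -- all assumed to exist.
    accuracy = accuracy_score(y_true, y_pred)
    f1 = f1_score(y_true, y_pred, average="weighted")
    auc_roc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="weighted")

    print(f"Accuracy: {accuracy:.4f}")
    print(f"F1 Score: {f1:.4f}")
    print(f"AUC-ROC: {auc_roc:.4f}")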
373
+
374
+ Training with Final tokenizer:
375
+ Vocabulary size: 27998
376
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
377
+ Initialized model with vocabulary size: 27998
378
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
379
+ scaler = amp.GradScaler()
380
+ Batch 0:
381
+ input_ids shape: torch.Size([16, 256])
382
+ attention_mask shape: torch.Size([16, 256])
383
+ labels shape: torch.Size([16])
384
+ input_ids max value: 27997
385
+ Vocab size: 27998
386
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
387
+ with amp.autocast():
388
+ Batch 100:
389
+ input_ids shape: torch.Size([16, 256])
390
+ attention_mask shape: torch.Size([16, 256])
391
+ labels shape: torch.Size([16])
392
+ input_ids max value: 27997
393
+ Vocab size: 27998
394
+ Batch 200:
395
+ input_ids shape: torch.Size([16, 256])
396
+ attention_mask shape: torch.Size([16, 256])
397
+ labels shape: torch.Size([16])
398
+ input_ids max value: 27997
399
+ Vocab size: 27998
400
+ Batch 300:
401
+ input_ids shape: torch.Size([16, 256])
402
+ attention_mask shape: torch.Size([16, 256])
403
+ labels shape: torch.Size([16])
404
+ input_ids max value: 27997
405
+ Vocab size: 27998
406
+ Batch 400:
407
+ input_ids shape: torch.Size([16, 256])
408
+ attention_mask shape: torch.Size([16, 256])
409
+ labels shape: torch.Size([16])
410
+ input_ids max value: 27997
411
+ Vocab size: 27998
412
+ Batch 500:
413
+ input_ids shape: torch.Size([16, 256])
414
+ attention_mask shape: torch.Size([16, 256])
415
+ labels shape: torch.Size([16])
416
+ input_ids max value: 27997
417
+ Vocab size: 27998
418
+ Batch 600:
419
+ input_ids shape: torch.Size([16, 256])
420
+ attention_mask shape: torch.Size([16, 256])
421
+ labels shape: torch.Size([16])
422
+ input_ids max value: 27997
423
+ Vocab size: 27998
424
+ Batch 700:
425
+ input_ids shape: torch.Size([16, 256])
426
+ attention_mask shape: torch.Size([16, 256])
427
+ labels shape: torch.Size([16])
428
+ input_ids max value: 27997
429
+ Vocab size: 27998
430
+ Batch 800:
431
+ input_ids shape: torch.Size([16, 256])
432
+ attention_mask shape: torch.Size([16, 256])
433
+ labels shape: torch.Size([16])
434
+ input_ids max value: 27997
435
+ Vocab size: 27998
436
+ Batch 900:
437
+ input_ids shape: torch.Size([16, 256])
438
+ attention_mask shape: torch.Size([16, 256])
439
+ labels shape: torch.Size([16])
440
+ input_ids max value: 27997
441
+ Vocab size: 27998
442
+ Epoch 1/5:
443
+ Train Loss: 0.8917, Train Accuracy: 0.7102
444
+ Val Loss: 0.7550, Val Accuracy: 0.7533, Val F1: 0.7130
445
+ Batch 0:
446
+ input_ids shape: torch.Size([16, 256])
447
+ attention_mask shape: torch.Size([16, 256])
448
+ labels shape: torch.Size([16])
449
+ input_ids max value: 27997
450
+ Vocab size: 27998
451
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
452
+ with amp.autocast():
453
+ Batch 100:
454
+ input_ids shape: torch.Size([16, 256])
455
+ attention_mask shape: torch.Size([16, 256])
456
+ labels shape: torch.Size([16])
457
+ input_ids max value: 27997
458
+ Vocab size: 27998
459
+ Batch 200:
460
+ input_ids shape: torch.Size([16, 256])
461
+ attention_mask shape: torch.Size([16, 256])
462
+ labels shape: torch.Size([16])
463
+ input_ids max value: 27997
464
+ Vocab size: 27998
465
+ Batch 300:
466
+ input_ids shape: torch.Size([16, 256])
467
+ attention_mask shape: torch.Size([16, 256])
468
+ labels shape: torch.Size([16])
469
+ input_ids max value: 27997
470
+ Vocab size: 27998
471
+ Batch 400:
472
+ input_ids shape: torch.Size([16, 256])
473
+ attention_mask shape: torch.Size([16, 256])
474
+ labels shape: torch.Size([16])
475
+ input_ids max value: 27997
476
+ Vocab size: 27998
477
+ Batch 500:
478
+ input_ids shape: torch.Size([16, 256])
479
+ attention_mask shape: torch.Size([16, 256])
480
+ labels shape: torch.Size([16])
481
+ input_ids max value: 27997
482
+ Vocab size: 27998
483
+ Batch 600:
484
+ input_ids shape: torch.Size([16, 256])
485
+ attention_mask shape: torch.Size([16, 256])
486
+ labels shape: torch.Size([16])
487
+ input_ids max value: 27997
488
+ Vocab size: 27998
489
+ Batch 700:
490
+ input_ids shape: torch.Size([16, 256])
491
+ attention_mask shape: torch.Size([16, 256])
492
+ labels shape: torch.Size([16])
493
+ input_ids max value: 27997
494
+ Vocab size: 27998
495
+ Batch 800:
496
+ input_ids shape: torch.Size([16, 256])
497
+ attention_mask shape: torch.Size([16, 256])
498
+ labels shape: torch.Size([16])
499
+ input_ids max value: 27997
500
+ Vocab size: 27998
501
+ Batch 900:
502
+ input_ids shape: torch.Size([16, 256])
503
+ attention_mask shape: torch.Size([16, 256])
504
+ labels shape: torch.Size([16])
505
+ input_ids max value: 27997
506
+ Vocab size: 27998
507
+ Epoch 2/5:
508
+ Train Loss: 0.6483, Train Accuracy: 0.7855
509
+ Val Loss: 0.6702, Val Accuracy: 0.7822, Val F1: 0.7506
510
+ Batch 0:
511
+ input_ids shape: torch.Size([16, 256])
512
+ attention_mask shape: torch.Size([16, 256])
513
+ labels shape: torch.Size([16])
514
+ input_ids max value: 27997
515
+ Vocab size: 27998
516
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
517
+ with amp.autocast():
518
+ Batch 100:
519
+ input_ids shape: torch.Size([16, 256])
520
+ attention_mask shape: torch.Size([16, 256])
521
+ labels shape: torch.Size([16])
522
+ input_ids max value: 27997
523
+ Vocab size: 27998
524
+ Batch 200:
525
+ input_ids shape: torch.Size([16, 256])
526
+ attention_mask shape: torch.Size([16, 256])
527
+ labels shape: torch.Size([16])
528
+ input_ids max value: 27997
529
+ Vocab size: 27998
530
+ Batch 300:
531
+ input_ids shape: torch.Size([16, 256])
532
+ attention_mask shape: torch.Size([16, 256])
533
+ labels shape: torch.Size([16])
534
+ input_ids max value: 27997
535
+ Vocab size: 27998
536
+ Batch 400:
537
+ input_ids shape: torch.Size([16, 256])
538
+ attention_mask shape: torch.Size([16, 256])
539
+ labels shape: torch.Size([16])
540
+ input_ids max value: 27997
541
+ Vocab size: 27998
542
+ Batch 500:
543
+ input_ids shape: torch.Size([16, 256])
544
+ attention_mask shape: torch.Size([16, 256])
545
+ labels shape: torch.Size([16])
546
+ input_ids max value: 27997
547
+ Vocab size: 27998
548
+ Batch 600:
549
+ input_ids shape: torch.Size([16, 256])
550
+ attention_mask shape: torch.Size([16, 256])
551
+ labels shape: torch.Size([16])
552
+ input_ids max value: 27997
553
+ Vocab size: 27998
554
+ Batch 700:
555
+ input_ids shape: torch.Size([16, 256])
556
+ attention_mask shape: torch.Size([16, 256])
557
+ labels shape: torch.Size([16])
558
+ input_ids max value: 27997
559
+ Vocab size: 27998
560
+ Batch 800:
561
+ input_ids shape: torch.Size([16, 256])
562
+ attention_mask shape: torch.Size([16, 256])
563
+ labels shape: torch.Size([16])
564
+ input_ids max value: 27997
565
+ Vocab size: 27998
566
+ Batch 900:
567
+ input_ids shape: torch.Size([16, 256])
568
+ attention_mask shape: torch.Size([16, 256])
569
+ labels shape: torch.Size([16])
570
+ input_ids max value: 27997
571
+ Vocab size: 27998
572
+ Epoch 3/5:
573
+ Train Loss: 0.5660, Train Accuracy: 0.8135
574
+ Val Loss: 0.6397, Val Accuracy: 0.7983, Val F1: 0.7548
575
+ Batch 0:
576
+ input_ids shape: torch.Size([16, 256])
577
+ attention_mask shape: torch.Size([16, 256])
578
+ labels shape: torch.Size([16])
579
+ input_ids max value: 27997
580
+ Vocab size: 27998
581
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
582
+ with amp.autocast():
583
+ Batch 100:
584
+ input_ids shape: torch.Size([16, 256])
585
+ attention_mask shape: torch.Size([16, 256])
586
+ labels shape: torch.Size([16])
587
+ input_ids max value: 27997
588
+ Vocab size: 27998
589
+ Batch 200:
590
+ input_ids shape: torch.Size([16, 256])
591
+ attention_mask shape: torch.Size([16, 256])
592
+ labels shape: torch.Size([16])
593
+ input_ids max value: 27997
594
+ Vocab size: 27998
595
+ Batch 300:
596
+ input_ids shape: torch.Size([16, 256])
597
+ attention_mask shape: torch.Size([16, 256])
598
+ labels shape: torch.Size([16])
599
+ input_ids max value: 27997
600
+ Vocab size: 27998
601
+ Batch 400:
602
+ input_ids shape: torch.Size([16, 256])
603
+ attention_mask shape: torch.Size([16, 256])
604
+ labels shape: torch.Size([16])
605
+ input_ids max value: 27997
606
+ Vocab size: 27998
607
+ Batch 500:
608
+ input_ids shape: torch.Size([16, 256])
609
+ attention_mask shape: torch.Size([16, 256])
610
+ labels shape: torch.Size([16])
611
+ input_ids max value: 27997
612
+ Vocab size: 27998
613
+ Batch 600:
614
+ input_ids shape: torch.Size([16, 256])
615
+ attention_mask shape: torch.Size([16, 256])
616
+ labels shape: torch.Size([16])
617
+ input_ids max value: 27997
618
+ Vocab size: 27998
619
+ Batch 700:
620
+ input_ids shape: torch.Size([16, 256])
621
+ attention_mask shape: torch.Size([16, 256])
622
+ labels shape: torch.Size([16])
623
+ input_ids max value: 27997
624
+ Vocab size: 27998
625
+ Batch 800:
626
+ input_ids shape: torch.Size([16, 256])
627
+ attention_mask shape: torch.Size([16, 256])
628
+ labels shape: torch.Size([16])
629
+ input_ids max value: 27997
630
+ Vocab size: 27998
631
+ Batch 900:
632
+ input_ids shape: torch.Size([16, 256])
633
+ attention_mask shape: torch.Size([16, 256])
634
+ labels shape: torch.Size([16])
635
+ input_ids max value: 27997
636
+ Vocab size: 27998
637
+ Epoch 4/5:
638
+ Train Loss: 0.4725, Train Accuracy: 0.8545
639
+ Val Loss: 0.7259, Val Accuracy: 0.7707, Val F1: 0.7672
640
+ Batch 0:
641
+ input_ids shape: torch.Size([16, 256])
642
+ attention_mask shape: torch.Size([16, 256])
643
+ labels shape: torch.Size([16])
644
+ input_ids max value: 27997
645
+ Vocab size: 27998
646
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
647
+ with amp.autocast():
648
+ Batch 100:
649
+ input_ids shape: torch.Size([16, 256])
650
+ attention_mask shape: torch.Size([16, 256])
651
+ labels shape: torch.Size([16])
652
+ input_ids max value: 27997
653
+ Vocab size: 27998
654
+ Batch 200:
655
+ input_ids shape: torch.Size([16, 256])
656
+ attention_mask shape: torch.Size([16, 256])
657
+ labels shape: torch.Size([16])
658
+ input_ids max value: 27997
659
+ Vocab size: 27998
660
+ Batch 300:
661
+ input_ids shape: torch.Size([16, 256])
662
+ attention_mask shape: torch.Size([16, 256])
663
+ labels shape: torch.Size([16])
664
+ input_ids max value: 27997
665
+ Vocab size: 27998
666
+ Batch 400:
667
+ input_ids shape: torch.Size([16, 256])
668
+ attention_mask shape: torch.Size([16, 256])
669
+ labels shape: torch.Size([16])
670
+ input_ids max value: 27997
671
+ Vocab size: 27998
672
+ Batch 500:
673
+ input_ids shape: torch.Size([16, 256])
674
+ attention_mask shape: torch.Size([16, 256])
675
+ labels shape: torch.Size([16])
676
+ input_ids max value: 27997
677
+ Vocab size: 27998
678
+ Batch 600:
679
+ input_ids shape: torch.Size([16, 256])
680
+ attention_mask shape: torch.Size([16, 256])
681
+ labels shape: torch.Size([16])
682
+ input_ids max value: 27997
683
+ Vocab size: 27998
684
+ Batch 700:
685
+ input_ids shape: torch.Size([16, 256])
686
+ attention_mask shape: torch.Size([16, 256])
687
+ labels shape: torch.Size([16])
688
+ input_ids max value: 27997
689
+ Vocab size: 27998
690
+ Batch 800:
691
+ input_ids shape: torch.Size([16, 256])
692
+ attention_mask shape: torch.Size([16, 256])
693
+ labels shape: torch.Size([16])
694
+ input_ids max value: 27997
695
+ Vocab size: 27998
696
+ Batch 900:
697
+ input_ids shape: torch.Size([16, 256])
698
+ attention_mask shape: torch.Size([16, 256])
699
+ labels shape: torch.Size([16])
700
+ input_ids max value: 27997
701
+ Vocab size: 27998
702
+ Epoch 5/5:
703
+ Train Loss: 0.3889, Train Accuracy: 0.8792
704
+ Val Loss: 0.5967, Val Accuracy: 0.8174, Val F1: 0.7926
705
+
706
+ Test Results for Final tokenizer:
707
+ Accuracy: 0.8174
708
+ F1 Score: 0.7925
709
+ AUC-ROC: 0.8663
710
+
711
+ Training with General tokenizer:
712
+ Vocabulary size: 30522
713
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
714
+ Initialized model with vocabulary size: 30522
715
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
716
+ scaler = amp.GradScaler()
717
+ Batch 0:
718
+ input_ids shape: torch.Size([16, 256])
719
+ attention_mask shape: torch.Size([16, 256])
720
+ labels shape: torch.Size([16])
721
+ input_ids max value: 29605
722
+ Vocab size: 30522
723
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
724
+ with amp.autocast():
725
+ Batch 100:
726
+ input_ids shape: torch.Size([16, 256])
727
+ attention_mask shape: torch.Size([16, 256])
728
+ labels shape: torch.Size([16])
729
+ input_ids max value: 29438
730
+ Vocab size: 30522
731
+ Batch 200:
732
+ input_ids shape: torch.Size([16, 256])
733
+ attention_mask shape: torch.Size([16, 256])
734
+ labels shape: torch.Size([16])
735
+ input_ids max value: 29300
736
+ Vocab size: 30522
737
+ Batch 300:
738
+ input_ids shape: torch.Size([16, 256])
739
+ attention_mask shape: torch.Size([16, 256])
740
+ labels shape: torch.Size([16])
741
+ input_ids max value: 29464
742
+ Vocab size: 30522
743
+ Batch 400:
744
+ input_ids shape: torch.Size([16, 256])
745
+ attention_mask shape: torch.Size([16, 256])
746
+ labels shape: torch.Size([16])
747
+ input_ids max value: 29494
748
+ Vocab size: 30522
749
+ Batch 500:
750
+ input_ids shape: torch.Size([16, 256])
751
+ attention_mask shape: torch.Size([16, 256])
752
+ labels shape: torch.Size([16])
753
+ input_ids max value: 29464
754
+ Vocab size: 30522
755
+ Batch 600:
756
+ input_ids shape: torch.Size([16, 256])
757
+ attention_mask shape: torch.Size([16, 256])
758
+ labels shape: torch.Size([16])
759
+ input_ids max value: 29464
760
+ Vocab size: 30522
761
+ Batch 700:
762
+ input_ids shape: torch.Size([16, 256])
763
+ attention_mask shape: torch.Size([16, 256])
764
+ labels shape: torch.Size([16])
765
+ input_ids max value: 29464
766
+ Vocab size: 30522
767
+ Batch 800:
768
+ input_ids shape: torch.Size([16, 256])
769
+ attention_mask shape: torch.Size([16, 256])
770
+ labels shape: torch.Size([16])
771
+ input_ids max value: 29340
772
+ Vocab size: 30522
773
+ Batch 900:
774
+ input_ids shape: torch.Size([16, 256])
775
+ attention_mask shape: torch.Size([16, 256])
776
+ labels shape: torch.Size([16])
777
+ input_ids max value: 29454
778
+ Vocab size: 30522
779
+ Epoch 1/5:
780
+ Train Loss: 0.8557, Train Accuracy: 0.7257
781
+ Val Loss: 0.6864, Val Accuracy: 0.7724, Val F1: 0.7309
782
+ Batch 0:
783
+ input_ids shape: torch.Size([16, 256])
784
+ attention_mask shape: torch.Size([16, 256])
785
+ labels shape: torch.Size([16])
786
+ input_ids max value: 29300
787
+ Vocab size: 30522
788
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
789
+ with amp.autocast():
790
+ Batch 100:
791
+ input_ids shape: torch.Size([16, 256])
792
+ attention_mask shape: torch.Size([16, 256])
793
+ labels shape: torch.Size([16])
794
+ input_ids max value: 29464
795
+ Vocab size: 30522
796
+ Batch 200:
797
+ input_ids shape: torch.Size([16, 256])
798
+ attention_mask shape: torch.Size([16, 256])
799
+ labels shape: torch.Size([16])
800
+ input_ids max value: 29494
801
+ Vocab size: 30522
802
+ Batch 300:
803
+ input_ids shape: torch.Size([16, 256])
804
+ attention_mask shape: torch.Size([16, 256])
805
+ labels shape: torch.Size([16])
806
+ input_ids max value: 29474
807
+ Vocab size: 30522
808
+ Batch 400:
809
+ input_ids shape: torch.Size([16, 256])
810
+ attention_mask shape: torch.Size([16, 256])
811
+ labels shape: torch.Size([16])
812
+ input_ids max value: 29535
813
+ Vocab size: 30522
814
+ Batch 500:
815
+ input_ids shape: torch.Size([16, 256])
816
+ attention_mask shape: torch.Size([16, 256])
817
+ labels shape: torch.Size([16])
818
+ input_ids max value: 29577
819
+ Vocab size: 30522
820
+ Batch 600:
821
+ input_ids shape: torch.Size([16, 256])
822
+ attention_mask shape: torch.Size([16, 256])
823
+ labels shape: torch.Size([16])
824
+ input_ids max value: 29598
825
+ Vocab size: 30522
826
+ Batch 700:
827
+ input_ids shape: torch.Size([16, 256])
828
+ attention_mask shape: torch.Size([16, 256])
829
+ labels shape: torch.Size([16])
830
+ input_ids max value: 29605
831
+ Vocab size: 30522
832
+ Batch 800:
833
+ input_ids shape: torch.Size([16, 256])
834
+ attention_mask shape: torch.Size([16, 256])
835
+ labels shape: torch.Size([16])
836
+ input_ids max value: 29160
837
+ Vocab size: 30522
838
+ Batch 900:
839
+ input_ids shape: torch.Size([16, 256])
840
+ attention_mask shape: torch.Size([16, 256])
841
+ labels shape: torch.Size([16])
842
+ input_ids max value: 29532
843
+ Vocab size: 30522
844
+ Epoch 2/5:
845
+ Train Loss: 0.5995, Train Accuracy: 0.8029
846
+ Val Loss: 0.6449, Val Accuracy: 0.7882, Val F1: 0.7366
847
+ Batch 0:
848
+ input_ids shape: torch.Size([16, 256])
849
+ attention_mask shape: torch.Size([16, 256])
850
+ labels shape: torch.Size([16])
851
+ input_ids max value: 29536
852
+ Vocab size: 30522
853
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
854
+ with amp.autocast():
855
+ Batch 100:
856
+ input_ids shape: torch.Size([16, 256])
857
+ attention_mask shape: torch.Size([16, 256])
858
+ labels shape: torch.Size([16])
859
+ input_ids max value: 29464
860
+ Vocab size: 30522
861
+ Batch 200:
862
+ input_ids shape: torch.Size([16, 256])
863
+ attention_mask shape: torch.Size([16, 256])
864
+ labels shape: torch.Size([16])
865
+ input_ids max value: 29536
866
+ Vocab size: 30522
867
+ Batch 300:
868
+ input_ids shape: torch.Size([16, 256])
869
+ attention_mask shape: torch.Size([16, 256])
870
+ labels shape: torch.Size([16])
871
+ input_ids max value: 29464
872
+ Vocab size: 30522
873
+ Batch 400:
874
+ input_ids shape: torch.Size([16, 256])
875
+ attention_mask shape: torch.Size([16, 256])
876
+ labels shape: torch.Size([16])
877
+ input_ids max value: 29464
878
+ Vocab size: 30522
879
+ Batch 500:
880
+ input_ids shape: torch.Size([16, 256])
881
+ attention_mask shape: torch.Size([16, 256])
882
+ labels shape: torch.Size([16])
883
+ input_ids max value: 29464
884
+ Vocab size: 30522
885
+ Batch 600:
886
+ input_ids shape: torch.Size([16, 256])
887
+ attention_mask shape: torch.Size([16, 256])
888
+ labels shape: torch.Size([16])
889
+ input_ids max value: 29413
890
+ Vocab size: 30522
891
+ Batch 700:
892
+ input_ids shape: torch.Size([16, 256])
893
+ attention_mask shape: torch.Size([16, 256])
894
+ labels shape: torch.Size([16])
895
+ input_ids max value: 29346
896
+ Vocab size: 30522
897
+ Batch 800:
898
+ input_ids shape: torch.Size([16, 256])
899
+ attention_mask shape: torch.Size([16, 256])
900
+ labels shape: torch.Size([16])
901
+ input_ids max value: 29451
902
+ Vocab size: 30522
903
+ Batch 900:
904
+ input_ids shape: torch.Size([16, 256])
905
+ attention_mask shape: torch.Size([16, 256])
906
+ labels shape: torch.Size([16])
907
+ input_ids max value: 29280
908
+ Vocab size: 30522
909
+ Epoch 3/5:
910
+ Train Loss: 0.5332, Train Accuracy: 0.8291
911
+ Val Loss: 0.6577, Val Accuracy: 0.7942, Val F1: 0.7687
912
+ Batch 0:
913
+ input_ids shape: torch.Size([16, 256])
914
+ attention_mask shape: torch.Size([16, 256])
915
+ labels shape: torch.Size([16])
916
+ input_ids max value: 29464
917
+ Vocab size: 30522
918
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
919
+ with amp.autocast():
920
+ Batch 100:
921
+ input_ids shape: torch.Size([16, 256])
922
+ attention_mask shape: torch.Size([16, 256])
923
+ labels shape: torch.Size([16])
924
+ input_ids max value: 29464
925
+ Vocab size: 30522
926
+ Batch 200:
927
+ input_ids shape: torch.Size([16, 256])
928
+ attention_mask shape: torch.Size([16, 256])
929
+ labels shape: torch.Size([16])
930
+ input_ids max value: 29535
931
+ Vocab size: 30522
932
+ Batch 300:
933
+ input_ids shape: torch.Size([16, 256])
934
+ attention_mask shape: torch.Size([16, 256])
935
+ labels shape: torch.Size([16])
936
+ input_ids max value: 29413
937
+ Vocab size: 30522
938
+ Batch 400:
939
+ input_ids shape: torch.Size([16, 256])
940
+ attention_mask shape: torch.Size([16, 256])
941
+ labels shape: torch.Size([16])
942
+ input_ids max value: 29461
943
+ Vocab size: 30522
944
+ Batch 500:
945
+ input_ids shape: torch.Size([16, 256])
946
+ attention_mask shape: torch.Size([16, 256])
947
+ labels shape: torch.Size([16])
948
+ input_ids max value: 29536
949
+ Vocab size: 30522
950
+ Batch 600:
951
+ input_ids shape: torch.Size([16, 256])
952
+ attention_mask shape: torch.Size([16, 256])
953
+ labels shape: torch.Size([16])
954
+ input_ids max value: 29300
955
+ Vocab size: 30522
956
+ Batch 700:
957
+ input_ids shape: torch.Size([16, 256])
958
+ attention_mask shape: torch.Size([16, 256])
959
+ labels shape: torch.Size([16])
960
+ input_ids max value: 29536
961
+ Vocab size: 30522
962
+ Batch 800:
963
+ input_ids shape: torch.Size([16, 256])
964
+ attention_mask shape: torch.Size([16, 256])
965
+ labels shape: torch.Size([16])
966
+ input_ids max value: 29513
967
+ Vocab size: 30522
968
+ Batch 900:
969
+ input_ids shape: torch.Size([16, 256])
970
+ attention_mask shape: torch.Size([16, 256])
971
+ labels shape: torch.Size([16])
972
+ input_ids max value: 29536
973
+ Vocab size: 30522
974
+ Epoch 4/5:
975
+ Train Loss: 0.4665, Train Accuracy: 0.8555
976
+ Val Loss: 0.6495, Val Accuracy: 0.7931, Val F1: 0.7709
977
+ Batch 0:
978
+ input_ids shape: torch.Size([16, 256])
979
+ attention_mask shape: torch.Size([16, 256])
980
+ labels shape: torch.Size([16])
981
+ input_ids max value: 29454
982
+ Vocab size: 30522
983
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
984
+ with amp.autocast():
985
+ Batch 100:
986
+ input_ids shape: torch.Size([16, 256])
987
+ attention_mask shape: torch.Size([16, 256])
988
+ labels shape: torch.Size([16])
989
+ input_ids max value: 29598
990
+ Vocab size: 30522
991
+ Batch 200:
992
+ input_ids shape: torch.Size([16, 256])
993
+ attention_mask shape: torch.Size([16, 256])
994
+ labels shape: torch.Size([16])
995
+ input_ids max value: 29336
996
+ Vocab size: 30522
997
+ Batch 300:
998
+ input_ids shape: torch.Size([16, 256])
999
+ attention_mask shape: torch.Size([16, 256])
1000
+ labels shape: torch.Size([16])
1001
+ input_ids max value: 29602
1002
+ Vocab size: 30522
1003
+ Batch 400:
1004
+ input_ids shape: torch.Size([16, 256])
1005
+ attention_mask shape: torch.Size([16, 256])
1006
+ labels shape: torch.Size([16])
1007
+ input_ids max value: 29598
1008
+ Vocab size: 30522
1009
+ Batch 500:
1010
+ input_ids shape: torch.Size([16, 256])
1011
+ attention_mask shape: torch.Size([16, 256])
1012
+ labels shape: torch.Size([16])
1013
+ input_ids max value: 29464
1014
+ Vocab size: 30522
1015
+ Batch 600:
1016
+ input_ids shape: torch.Size([16, 256])
1017
+ attention_mask shape: torch.Size([16, 256])
1018
+ labels shape: torch.Size([16])
1019
+ input_ids max value: 29513
1020
+ Vocab size: 30522
1021
+ Batch 700:
1022
+ input_ids shape: torch.Size([16, 256])
1023
+ attention_mask shape: torch.Size([16, 256])
1024
+ labels shape: torch.Size([16])
1025
+ input_ids max value: 29464
1026
+ Vocab size: 30522
1027
+ Batch 800:
1028
+ input_ids shape: torch.Size([16, 256])
1029
+ attention_mask shape: torch.Size([16, 256])
1030
+ labels shape: torch.Size([16])
1031
+ input_ids max value: 29536
1032
+ Vocab size: 30522
1033
+ Batch 900:
1034
+ input_ids shape: torch.Size([16, 256])
1035
+ attention_mask shape: torch.Size([16, 256])
1036
+ labels shape: torch.Size([16])
1037
+ input_ids max value: 29535
1038
+ Vocab size: 30522
1039
+ Epoch 5/5:
1040
+ Train Loss: 0.3991, Train Accuracy: 0.8781
1041
+ Val Loss: 0.6572, Val Accuracy: 0.7948, Val F1: 0.7804
1042
+
1043
+ Test Results for General tokenizer:
1044
+ Accuracy: 0.7945
1045
+ F1 Score: 0.7802
1046
+ AUC-ROC: 0.8825
1047
+
1048
+ Summary of Results:
1049
+
1050
+ All Cluster Tokenizer:
1051
+ Accuracy: 0.8125
1052
+ F1 Score: 0.8071
1053
+ AUC-ROC: 0.8733
1054
+
1055
+ Final Tokenizer:
1056
+ Accuracy: 0.8174
1057
+ F1 Score: 0.7925
1058
+ AUC-ROC: 0.8663
1059
+
1060
+ General Tokenizer:
1061
+ Accuracy: 0.7945
1062
+ F1 Score: 0.7802
1063
+ AUC-ROC: 0.8825
1064
+
1065
+ Class distribution in training set:
1066
+ Class Biology: 439 samples
1067
+ Class Chemistry: 454 samples
1068
+ Class Computer Science: 1358 samples
1069
+ Class Mathematics: 9480 samples
1070
+ Class Physics: 2733 samples
1071
+ Class Statistics: 200 samples
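The counts above show a strongly imbalanced training set (9480 Mathematics samples versus 200 Statistics samples). A small sketch of how such a distribution is tallied from the training labels (train_labels is assumed to be the list of domain names for the training split):

    from collections import Counter

    counts = Counter(train_labels)
    print("Class distribution in training set:")
    for cls, n in sorted(counts.items()):
        print(f"Class {cls}: {n} samples")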