Loading pytorch-gpu/py3/2.1.1
  Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
    gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
    sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
+ HF_DATASETS_OFFLINE=1
+ TRANSFORMERS_OFFLINE=1
+ python3 OnlyGeneralTokenizer.py
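
The two exported variables above put the Hugging Face libraries into offline mode, so datasets and models are read from the local cache instead of the network (required on compute nodes without internet access). A minimal Python equivalent, shown only as a sketch since the run sets them in the shell:

    import os

    # Must be set before `datasets`/`transformers` are imported.
    os.environ["HF_DATASETS_OFFLINE"] = "1"    # datasets: local cache only
    os.environ["TRANSFORMERS_OFFLINE"] = "1"   # transformers: local cache only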

Checking label assignment:

Domain: Mathematics
Categories: math.DS math.CA
Abstract: we prove an inequality for holder continuous differential forms on compact manifolds in which the in...

Domain: Computer Science
Categories: cs.NE
Abstract: when looking for a solution deterministic methods have the enormous advantage that they do find glob...

Domain: Physics
Categories: physics.hist-ph quant-ph
Abstract: maxwells demon was born in and still thrives in modern physics he plays important roles in clarifyin...

Domain: Chemistry
Categories: nlin.PS
Abstract: the modulational instability of two interacting waves in a nonlocal kerrtype medium is considered an...

Domain: Statistics
Categories: astro-ph stat.ME
Abstract: the identification of increasingly smaller signal from objects observed with a nonperfect instrument...

Domain: Biology
Categories: q-bio.MN cond-mat.stat-mech
Abstract: we find that discrete noise of inhibiting signal molecules can greatly delay the extinction of plasm...
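
One sample per domain is printed as a spot check; note that the Chemistry sample carries an nlin.PS (nonlinear sciences) category, so the category-to-domain mapping may deserve a second look. A hypothetical reconstruction of the loop behind this printout, with field names inferred from the output rather than taken from the actual script:

    # `samples` is assumed to be a list of dicts with these keys.
    domains = ["Mathematics", "Computer Science", "Physics",
               "Chemistry", "Statistics", "Biology"]
    for domain in domains:
        sample = next(s for s in samples if s["domain"] == domain)
        print(f"\nDomain: {domain}")
        print(f"Categories: {sample['categories']}")
        print(f"Abstract: {sample['abstract'][:100]}...")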
/linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
  warnings.warn(
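
The warning fires because the script passes the path of a single vocabulary file to BertTokenizer.from_pretrained. The supported pattern, per the warning, is a hub identifier or a directory containing the tokenizer files; a sketch, assuming the local model directory from the log holds a vocab.txt:

    from transformers import BertTokenizer

    # Deprecated: BertTokenizer.from_pretrained("/path/to/vocab.txt")
    # Supported: a directory with the tokenizer files, or a hub id
    # such as "bert-base-uncased" (whose vocab size, 30522, matches).
    tokenizer = BertTokenizer.from_pretrained(
        "/linkhome/rech/genrug01/uft12cr/bert_Model"
    )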

Training with General tokenizer:
Vocabulary size: 30522
Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at '/linkhome/rech/genrug01/uft12cr/bert_Model/config.json' is not a valid JSON file.
Initialized model with vocabulary size: 30522
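
Note that the run quietly continues from random weights after the config parse failure, which invalidates any comparison against a pretrained baseline. Failing fast is safer; a small pre-flight check along these lines, using the path from the log:

    import json
    from pathlib import Path

    cfg = Path("/linkhome/rech/genrug01/uft12cr/bert_Model/config.json")
    try:
        json.loads(cfg.read_text())
    except json.JSONDecodeError as err:
        # A truncated download or an HTML error page saved as config.json
        # are common causes of "not a valid JSON file".
        raise SystemExit(f"Invalid config at {cfg}: {err}")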
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
  scaler = amp.GradScaler()
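
The fix is the one named in the warning itself, available since the torch.amp unification:

    import torch

    # Deprecated: scaler = torch.cuda.amp.GradScaler()
    scaler = torch.amp.GradScaler("cuda")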
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
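
This per-batch printout is a bounds check: every input id must be strictly less than the vocabulary size, otherwise the embedding lookup fails. A hypothetical version of the diagnostic, with names inferred from the output:

    def check_batch(batch, vocab_size):
        input_ids = batch["input_ids"]            # e.g. torch.Size([16, 256])
        max_id = int(input_ids.max())
        print(f"input_ids shape: {input_ids.shape}")
        print(f"attention_mask shape: {batch['attention_mask'].shape}")
        print(f"labels shape: {batch['labels'].shape}")
        print(f"input_ids max value: {max_id}")
        print(f"Vocab size: {vocab_size}")
        # Ids >= vocab_size would raise an index error inside nn.Embedding.
        assert max_id < vocab_size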
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with amp.autocast():
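
The autocast warning is the other half of the same torch.amp migration; as with GradScaler, the device is now passed explicitly:

    import torch

    # Deprecated: with torch.cuda.amp.autocast():
    with torch.amp.autocast("cuda"):
        ...  # forward pass and loss computation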
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29513
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29237
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29586
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29221
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29514
Vocab size: 30522
Epoch 1/3:
Val Accuracy: 0.7306, Val F1: 0.6541
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29602
Vocab size: 30522
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29374
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29601
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29535
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29464
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29602
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29454
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29280
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29417
Vocab size: 30522
Epoch 2/3:
Val Accuracy: 0.7961, Val F1: 0.7582
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29299
Vocab size: 30522
Batch 100:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29577
Vocab size: 30522
Batch 200:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29536
Vocab size: 30522
Batch 300:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29451
Vocab size: 30522
Batch 400:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29454
Vocab size: 30522
Batch 500:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29532
Vocab size: 30522
Batch 600:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29413
Vocab size: 30522
Batch 700:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29586
Vocab size: 30522
Batch 800:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29280
Vocab size: 30522
Batch 900:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29494
Vocab size: 30522
Epoch 3/3:
Val Accuracy: 0.8204, Val F1: 0.7894

Test Results for General tokenizer:
Accuracy: 0.8204
F1 Score: 0.7893
AUC-ROC: 0.8693
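
The three numbers correspond to standard scikit-learn metrics; a sketch of how they are typically computed, where macro averaging for F1 and one-vs-rest AUC are assumptions consistent with the accuracy/F1 gap, not confirmed by the script:

    from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

    def report(y_true, y_pred, y_prob):
        # y_true, y_pred: 1-D arrays of class indices;
        # y_prob: (n_samples, n_classes) softmax probabilities.
        print(f"Accuracy: {accuracy_score(y_true, y_pred):.4f}")
        print(f"F1 Score: {f1_score(y_true, y_pred, average='macro'):.4f}")
        print(f"AUC-ROC: {roc_auc_score(y_true, y_prob, multi_class='ovr'):.4f}")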

Class distribution in training set:
Class Biology: 439 samples
Class Chemistry: 454 samples
Class Computer Science: 1358 samples
Class Mathematics: 9480 samples
Class Physics: 2733 samples
Class Statistics: 200 samples
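
The training set is heavily imbalanced (Mathematics outnumbers Statistics by roughly 47 to 1), which is consistent with macro F1 trailing accuracy: the minority classes drag the macro average down. A common mitigation, sketched here with the counts from the log rather than taken from the script, is inverse-frequency class weights in the loss:

    import torch

    # Counts from the log, in label order:
    # Biology, Chemistry, Computer Science, Mathematics, Physics, Statistics
    counts = torch.tensor([439., 454., 1358., 9480., 2733., 200.])
    weights = counts.sum() / (len(counts) * counts)  # "balanced" weighting
    criterion = torch.nn.CrossEntropyLoss(weight=weights)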