eacortes committed
Commit
d6ebf8d
·
verified ·
1 Parent(s): e8b0600

Update README and add additional benchmarking logs

Files changed (14)
  1. README.md +184 -18
  2. logs_modchembert_classification_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_antimalarial_epochs100_batch_size16_20250926_211449.log +355 -0
  3. logs_modchembert_classification_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_cocrystal_epochs100_batch_size32_20250927_065415.log +343 -0
  4. logs_modchembert_classification_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_covid19_epochs100_batch_size32_20250927_065342.log +331 -0
  5. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_microsom_stab_h_epochs100_batch_size32_20250926_053902.log +361 -0
  6. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_microsom_stab_r_epochs100_batch_size16_20250927_144017.log +325 -0
  7. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_permeability_epochs100_batch_size8_20250927_085030.log +379 -0
  8. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_ppb_h_epochs100_batch_size32_20250927_084912.log +337 -0
  9. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_ppb_r_epochs100_batch_size32_20250927_153939.log +421 -0
  10. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_solubility_epochs100_batch_size32_20250927_162635.log +329 -0
  11. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_astrazeneca_cl_epochs100_batch_size32_20250926_091804.log +323 -0
  12. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_astrazeneca_logd74_epochs100_batch_size16_20250927_204252.log +365 -0
  13. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_astrazeneca_ppb_epochs100_batch_size32_20250927_114432.log +391 -0
  14. logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_astrazeneca_solubility_epochs100_batch_size32_20250927_155133.log +391 -0
README.md CHANGED
@@ -118,6 +118,123 @@ model-index:
118
  metrics:
119
  - type: rmse
120
  value: 0.6505
121
  ---
122
 
123
  # ModChemBERT: ModernBERT as a Chemical Language Model
@@ -159,10 +276,10 @@ print(fill("c1ccccc1[MASK]"))
159
  - Encoder Layers: 22
160
  - Attention heads: 12
161
  - Max sequence length: 256 tokens (MLM primarily trained with 128-token sequences)
162
- - Vocabulary: BPE tokenizer using [MolFormer's vocab](https://github.com/emapco/ModChemBERT/blob/main/modchembert/tokenizers/molformer/vocab.json) (2362 tokens)
163
 
164
  ## Pooling (Classifier / Regressor Head)
165
- Kallergis et al. [1] demonstrated that the CLM embedding method prior to the prediction head can significantly impact downstream performance.
166
 
167
  Behrendt et al. [2] noted that the last few layers contain task-specific information and that pooling methods leveraging information from multiple layers can enhance model performance. Their results further demonstrated that the `max_seq_mha` pooling method was particularly effective in low-data regimes, which is often the case for molecular property prediction tasks.
168
 
@@ -178,6 +295,9 @@ Multiple pooling strategies are supported by ModChemBERT to explore their impact
178
  - `mean_sum`: Mean over all layers then sum tokens
179
  - `max_seq_mean`: Max over last k layers then mean tokens
180
 
181
  ## Training Pipeline
182
  <div align="center">
183
  <img src="https://cdn-uploads.huggingface.co/production/uploads/656892962693fa22e18b5331/bxNbpgMkU8m60ypyEJoWQ.png" alt="ModChemBERT Training Pipeline" width="650"/>
@@ -190,23 +310,33 @@ Following Sultan et al. [3], multi-task regression (physicochemical properties)
190
  Inspired by ModernBERT [4], JaColBERTv2.5 [5], and Llama 3.1 [6], where results show that model merging can enhance generalization or performance while mitigating overfitting to any single fine-tune or annealing checkpoint.
191
 
192
  ## Datasets
193
- - Pretraining: [Derify/augmented_canonical_druglike_QED_Pfizer_15M](https://huggingface.co/datasets/Derify/augmented_canonical_druglike_QED_Pfizer_15M)
194
- - Domain Adaptive Pretraining (DAPT) & Task Adaptive Fine-tuning (TAFT): ADME + AstraZeneca datasets (10 tasks) with scaffold splits from DA4MT pipeline (see [domain-adaptation-molecular-transformers](https://github.com/emapco/ModChemBERT/tree/main/domain-adaptation-molecular-transformers))
195
- - Benchmarking: ChemBERTa-3 [7] tasks (BACE, BBBP, TOX21, HIV, SIDER, CLINTOX for classification; ESOL, FREESOLV, LIPO, BACE, CLEARANCE for regression)
196
 
197
  ## Benchmarking
198
- Benchmarks were conducted with the ChemBERTa-3 framework using DeepChem scaffold splits. Each task was trained for 100 epochs with 3 random seeds.
199
 
200
  ### Evaluation Methodology
201
- - Classification Metric: ROC AUC.
202
- - Regression Metric: RMSE.
203
  - Aggregation: Mean ± standard deviation of the triplicate results.
204
- - Input Constraints: SMILES truncated / filtered to ≤200 tokens, following the MolFormer paper's recommendation.
205
 
206
  ### Results
207
  <details><summary>Click to expand</summary>
208
 
209
- #### Classification Datasets (ROC AUC - Higher is better)
210
 
211
  | Model | BACE↑ | BBBP↑ | CLINTOX↑ | HIV↑ | SIDER↑ | TOX21↑ | AVG† |
212
  | ---------------------------------------------------------------------------- | ----------------- | ----------------- | --------------------- | --------------------- | --------------------- | ----------------- | ------ |
@@ -214,14 +344,14 @@ Benchmarks were conducted with the ChemBERTa-3 framework using DeepChem scaffold
214
  | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 0.781 ± 0.019 | 0.700 ± 0.027 | 0.979 ± 0.022 | 0.740 ± 0.013 | 0.611 ± 0.002 | 0.718 ± 0.011 | 0.7548 |
215
  | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 0.819 ± 0.019 | 0.735 ± 0.019 | 0.839 ± 0.013 | 0.762 ± 0.005 | 0.618 ± 0.005 | 0.723 ± 0.012 | 0.7493 |
216
  | MoLFormer-LHPC* | **0.887 ± 0.004** | **0.908 ± 0.013** | 0.993 ± 0.004 | 0.750 ± 0.003 | 0.622 ± 0.007 | **0.791 ± 0.014** | 0.8252 |
217
- | ------------------------- | ----------------- | ----------------- | ------------------- | ------------------- | ------------------- | ----------------- | ------ |
218
  | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8065 ± 0.0103 | 0.7222 ± 0.0150 | 0.9709 ± 0.0227 | ***0.7800 ± 0.0133*** | 0.6419 ± 0.0113 | 0.7400 ± 0.0044 | 0.7769 |
219
  | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8224 ± 0.0156 | 0.7402 ± 0.0095 | 0.9820 ± 0.0138 | 0.7702 ± 0.0020 | 0.6303 ± 0.0039 | 0.7360 ± 0.0036 | 0.7802 |
220
  | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.7924 ± 0.0155 | 0.7282 ± 0.0058 | 0.9725 ± 0.0213 | 0.7770 ± 0.0047 | 0.6542 ± 0.0128 | *0.7646 ± 0.0039* | 0.7815 |
221
  | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8213 ± 0.0051 | 0.7356 ± 0.0094 | 0.9664 ± 0.0202 | 0.7750 ± 0.0048 | 0.6415 ± 0.0094 | 0.7263 ± 0.0036 | 0.7777 |
222
  | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | *0.8346 ± 0.0045* | *0.7573 ± 0.0120* | ***0.9938 ± 0.0017*** | 0.7737 ± 0.0034 | ***0.6600 ± 0.0061*** | 0.7518 ± 0.0047 | 0.7952 |
223
 
224
- #### Regression Datasets (RMSE - Lower is better)
225
 
226
  | Model | BACE↓ | CLEARANCE↓ | ESOL↓ | FREESOLV↓ | LIPO↓ | AVG‡ |
227
  | ---------------------------------------------------------------------------- | --------------------- | ---------------------- | --------------------- | --------------------- | --------------------- | ---------------- |
@@ -229,17 +359,45 @@ Benchmarks were conducted with the ChemBERTa-3 framework using DeepChem scaffold
229
  | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 1.011 ± 0.038 | 51.582 ± 3.079 | 0.920 ± 0.011 | 0.536 ± 0.016 | 0.758 ± 0.013 | 0.8063 / 10.9614 |
230
  | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 1.094 ± 0.126 | 52.058 ± 2.767 | 0.829 ± 0.019 | 0.572 ± 0.023 | 0.728 ± 0.016 | 0.8058 / 11.0562 |
231
  | MoLFormer-LHPC* | 1.201 ± 0.100 | 45.74 ± 2.637 | 0.848 ± 0.031 | 0.683 ± 0.040 | 0.895 ± 0.080 | 0.9068 / 9.8734 |
232
- | ------------------------- | ------------------- | -------------------- | ------------------- | ------------------- | ------------------- | ---------------- |
233
  | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 1.0893 ± 0.1319 | 49.0005 ± 1.2787 | 0.8456 ± 0.0406 | 0.5491 ± 0.0134 | 0.7147 ± 0.0062 | 0.7997 / 10.4398 |
234
  | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.9931 ± 0.0258 | 45.4951 ± 0.7112 | 0.9319 ± 0.0153 | 0.6049 ± 0.0666 | 0.6874 ± 0.0040 | 0.8043 / 9.7425 |
235
  | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 1.0304 ± 0.1146 | 47.8418 ± 0.4070 | ***0.7669 ± 0.0024*** | 0.5293 ± 0.0267 | 0.6708 ± 0.0074 | 0.7493 / 10.1678 |
236
  | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.9713 ± 0.0224 | ***42.8010 ± 3.3475*** | 0.8169 ± 0.0268 | 0.5445 ± 0.0257 | 0.6820 ± 0.0028 | 0.7537 / 9.1631 |
237
  | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.9665 ± 0.0250*** | 44.0137 ± 1.1110 | 0.8158 ± 0.0115 | ***0.4979 ± 0.0158*** | ***0.6505 ± 0.0126*** | 0.7327 / 9.3889 |
238
239
  **Bold** indicates the best result in the column; *italic* indicates the best result among ModChemBERT checkpoints.<br/>
240
  \* Published results from the ChemBERTa-3 [7] paper for optimized chemical language models using DeepChem scaffold splits.<br/>
241
- † AVG column shows the mean score across all classification tasks.<br/>
242
- ‡ AVG column shows the mean scores across all regression tasks without and with the clearance score.
243
 
244
  </details>
245
 
@@ -279,6 +437,9 @@ Optimal parameters (per dataset) for the `MLM + DAPT + TAFT OPT` merged model:
279
  | esol | 64 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |
280
  | freesolv | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
281
  | lipo | 32 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
282
 
283
  </details>
284
 
@@ -312,10 +473,15 @@ If you use ModChemBERT in your research, please cite the checkpoint and the foll
312
  ```
313
 
314
  ## References
315
- 1. Kallergis, Georgios, et al. "Domain adaptable language modeling of chemical compounds identifies potent pathoblockers for Pseudomonas aeruginosa." Communications Chemistry 8.1 (2025): 114.
316
  2. Behrendt, Maike, Stefan Sylvius Wagner, and Stefan Harmeling. "MaxPoolBERT: Enhancing BERT Classification via Layer-and Token-Wise Aggregation." arXiv preprint arXiv:2505.15696 (2025).
317
  3. Sultan, Afnan, et al. "Transformers for molecular property prediction: Domain adaptation efficiently improves performance." arXiv preprint arXiv:2503.03360 (2025).
318
  4. Warner, Benjamin, et al. "Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference." arXiv preprint arXiv:2412.13663 (2024).
319
- 5. Clavié, Benjamin. "JaColBERTv2.5: Optimising Multi-Vector Retrievers to Create State-of-the-Art Japanese Retrievers with Constrained Resources." Journal of Natural Language Processing 32.1 (2025): 176-218.
320
  6. Grattafiori, Aaron, et al. "The llama 3 herd of models." arXiv preprint arXiv:2407.21783 (2024).
321
- 7. Singh, Riya, et al. "ChemBERTa-3: An Open Source Training Framework for Chemical Foundation Models." (2025).
118
  metrics:
119
  - type: rmse
120
  value: 0.6505
121
+ - task:
122
+ type: text-classification
123
+ name: Classification (ROC AUC)
124
+ dataset:
125
+ name: Antimalarial
126
+ type: Antimalarial
127
+ metrics:
128
+ - type: roc_auc
129
+ value: 0.8966
130
+ - task:
131
+ type: text-classification
132
+ name: Classification (ROC AUC)
133
+ dataset:
134
+ name: Cocrystal
135
+ type: Cocrystal
136
+ metrics:
137
+ - type: roc_auc
138
+ value: 0.8654
139
+ - task:
140
+ type: text-classification
141
+ name: Classification (ROC AUC)
142
+ dataset:
143
+ name: COVID19
144
+ type: COVID19
145
+ metrics:
146
+ - type: roc_auc
147
+ value: 0.8132
148
+ - task:
149
+ type: regression
150
+ name: Regression (RMSE)
151
+ dataset:
152
+ name: ADME microsom stab human
153
+ type: ADME
154
+ metrics:
155
+ - type: rmse
156
+ value: 0.4248
157
+ - task:
158
+ type: regression
159
+ name: Regression (RMSE)
160
+ dataset:
161
+ name: ADME microsom stab rat
162
+ type: ADME
163
+ metrics:
164
+ - type: rmse
165
+ value: 0.4403
166
+ - task:
167
+ type: regression
168
+ name: Regression (RMSE)
169
+ dataset:
170
+ name: ADME permeability
171
+ type: ADME
172
+ metrics:
173
+ - type: rmse
174
+ value: 0.5025
175
+ - task:
176
+ type: regression
177
+ name: Regression (RMSE)
178
+ dataset:
179
+ name: ADME ppb human
180
+ type: ADME
181
+ metrics:
182
+ - type: rmse
183
+ value: 0.8901
184
+ - task:
185
+ type: regression
186
+ name: Regression (RMSE)
187
+ dataset:
188
+ name: ADME ppb rat
189
+ type: ADME
190
+ metrics:
191
+ - type: rmse
192
+ value: 0.7268
193
+ - task:
194
+ type: regression
195
+ name: Regression (RMSE)
196
+ dataset:
197
+ name: ADME solubility
198
+ type: ADME
199
+ metrics:
200
+ - type: rmse
201
+ value: 0.4627
202
+ - task:
203
+ type: regression
204
+ name: Regression (RMSE)
205
+ dataset:
206
+ name: AstraZeneca CL
207
+ type: AstraZeneca
208
+ metrics:
209
+ - type: rmse
210
+ value: 0.4932
211
+ - task:
212
+ type: regression
213
+ name: Regression (RMSE)
214
+ dataset:
215
+ name: AstraZeneca LogD74
216
+ type: AstraZeneca
217
+ metrics:
218
+ - type: rmse
219
+ value: 0.7596
220
+ - task:
221
+ type: regression
222
+ name: Regression (RMSE)
223
+ dataset:
224
+ name: AstraZeneca PPB
225
+ type: AstraZeneca
226
+ metrics:
227
+ - type: rmse
228
+ value: 0.1150
229
+ - task:
230
+ type: regression
231
+ name: Regression (RMSE)
232
+ dataset:
233
+ name: AstraZeneca Solubility
234
+ type: AstraZeneca
235
+ metrics:
236
+ - type: rmse
237
+ value: 0.8735
238
  ---
239
 
240
  # ModChemBERT: ModernBERT as a Chemical Language Model
 
276
  - Encoder Layers: 22
277
  - Attention heads: 12
278
  - Max sequence length: 256 tokens (MLM primarily trained with 128-token sequences)
279
+ - Tokenizer: BPE tokenizer using [MolFormer's vocab](https://github.com/emapco/ModChemBERT/blob/main/modchembert/tokenizers/molformer/vocab.json) (2362 tokens)
280
 
281
  ## Pooling (Classifier / Regressor Head)
282
+ Kallergis et al. [1] demonstrated that the CLM embedding method prior to the prediction head was the strongest contributor to downstream performance among evaluated hyperparameters.
283
 
284
  Behrendt et al. [2] noted that the last few layers contain task-specific information and that pooling methods leveraging information from multiple layers can enhance model performance. Their results further demonstrated that the `max_seq_mha` pooling method was particularly effective in low-data regimes, which is often the case for molecular property prediction tasks.
285
 
 
295
  - `mean_sum`: Mean over all layers then sum tokens
296
  - `max_seq_mean`: Max over last k layers then mean tokens
297
 
298
+ Note: ModChemBERT’s `max_seq_mha` differs from MaxPoolBERT [2]. MaxPoolBERT uses PyTorch `nn.MultiheadAttention`, whereas ModChemBERT's `ModChemBertPoolingAttention` adapts ModernBERT’s `ModernBertAttention`.
299
+ On ChemBERTa-3 benchmarks this variant produced stronger validation metrics and avoided the training instabilities (sporadic zero / NaN losses and gradient norms) seen with `nn.MultiheadAttention`. Training instability with ModernBERT has been reported in the past ([discussion 1](https://huggingface.co/answerdotai/ModernBERT-base/discussions/59) and [discussion 2](https://huggingface.co/answerdotai/ModernBERT-base/discussions/63)).
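The layer-and-token-wise pooling strategies listed above can be illustrated with a minimal NumPy sketch of `max_seq_mean` (max over the last k hidden-state layers, then mean over tokens). The function name and shapes here are illustrative, not ModChemBERT's actual implementation:

```python
import numpy as np

def max_seq_mean(hidden_states: list[np.ndarray], k: int = 3) -> np.ndarray:
    """Sketch of `max_seq_mean`: element-wise max over the last k layers,
    then mean over tokens.

    hidden_states: list of per-layer arrays, each (seq_len, hidden_dim).
    Returns a (hidden_dim,) pooled embedding.
    """
    stacked = np.stack(hidden_states[-k:], axis=0)  # (k, seq_len, hidden_dim)
    maxed = stacked.max(axis=0)                     # (seq_len, hidden_dim)
    return maxed.mean(axis=0)                       # (hidden_dim,)

# Toy example: 4 layers, 5 tokens, hidden size 8
layers = [np.random.rand(5, 8) for _ in range(4)]
pooled = max_seq_mean(layers, k=3)
print(pooled.shape)  # (8,)
```

The other strategies differ only in the reduction order and operators (e.g. `sum_mean` sums layers before averaging tokens).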
300
+
301
  ## Training Pipeline
302
  <div align="center">
303
  <img src="https://cdn-uploads.huggingface.co/production/uploads/656892962693fa22e18b5331/bxNbpgMkU8m60ypyEJoWQ.png" alt="ModChemBERT Training Pipeline" width="650"/>
 
310
  Inspired by ModernBERT [4], JaColBERTv2.5 [5], and Llama 3.1 [6], where results show that model merging can enhance generalization or performance while mitigating overfitting to any single fine-tune or annealing checkpoint.
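The merging step can be pictured as parameter averaging across fine-tuned checkpoints. A minimal sketch, assuming uniform weights and checkpoints represented as dicts of NumPy arrays (the actual ModChemBERT merge recipe may weight or select checkpoints differently):

```python
import numpy as np

def merge_checkpoints(checkpoints: list[dict]) -> dict:
    """Uniformly average matching parameters across checkpoints
    (one common model-merging recipe; illustrative only)."""
    merged = {}
    for name in checkpoints[0]:
        merged[name] = np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
    return merged

# Two hypothetical fine-tune checkpoints with one shared parameter
ckpt_a = {"layer.weight": np.array([1.0, 2.0])}
ckpt_b = {"layer.weight": np.array([3.0, 4.0])}
merged = merge_checkpoints([ckpt_a, ckpt_b])
print(merged["layer.weight"])  # [2. 3.]
```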
311
 
312
  ## Datasets
313
+ - Pretraining: [Derify/augmented_canonical_druglike_QED_Pfizer_15M](https://huggingface.co/datasets/Derify/augmented_canonical_druglike_QED_Pfizer_15M) (canonical_smiles column)
314
+ - Domain Adaptive Pretraining (DAPT) & Task Adaptive Fine-tuning (TAFT): ADME (6 tasks) + AstraZeneca (4 tasks) datasets that are split using DA4MT's [3] Bemis-Murcko scaffold splitter (see [domain-adaptation-molecular-transformers](https://github.com/emapco/ModChemBERT/blob/main/domain-adaptation-molecular-transformers/da4mt/splitting.py))
315
+ - Benchmarking:
316
+ - ChemBERTa-3 [7]
317
+ - classification: BACE, BBBP, TOX21, HIV, SIDER, CLINTOX
318
+ - regression: ESOL, FREESOLV, LIPO, BACE, CLEARANCE
319
+ - Mswahili, et al. [8] proposed additional datasets for benchmarking chemical language models:
320
+ - classification: Antimalarial [9], Cocrystal [10], COVID19 [11]
321
+ - DAPT/TAFT stage regression datasets:
322
+ - ADME [12]: adme_microsom_stab_h, adme_microsom_stab_r, adme_permeability, adme_ppb_h, adme_ppb_r, adme_solubility
323
+ - AstraZeneca: astrazeneca_CL, astrazeneca_LogD74, astrazeneca_PPB, astrazeneca_Solubility
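The scaffold splitting used for the DAPT/TAFT datasets can be sketched as follows. This is a simplified illustration: real pipelines such as DA4MT's splitter compute Bemis-Murcko scaffolds with RDKit, whereas here the scaffold strings are assumed precomputed, and the fill-train-first heuristic mirrors DeepChem-style splitters:

```python
from collections import defaultdict

def scaffold_split(smiles, scaffolds, test_frac=0.2):
    """Group molecules by (precomputed) scaffold and assign whole groups
    to train/test, so test-set scaffolds are unseen during training."""
    groups = defaultdict(list)
    for smi, scaf in zip(smiles, scaffolds):
        groups[scaf].append(smi)
    n_train = int(len(smiles) * (1 - test_frac))
    train, test = [], []
    # Largest scaffold groups fill the training set first
    for group in sorted(groups.values(), key=len, reverse=True):
        if len(train) + len(group) <= n_train:
            train.extend(group)
        else:
            test.extend(group)
    return train, test

# Toy molecules with hand-assigned scaffold labels (illustrative only)
smiles = ["c1ccccc1O", "c1ccccc1N", "C1CCCCC1", "CCO", "CCC"]
scaffolds = ["benzene", "benzene", "cyclohexane", "acyclic", "acyclic"]
train, test = scaffold_split(smiles, scaffolds, test_frac=0.4)
```

Keeping each scaffold group intact is what makes scaffold splits a harder, more realistic generalization test than random splits.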
324
 
325
  ## Benchmarking
326
+ Benchmarks were conducted with the ChemBERTa-3 framework, using DeepChem scaffold splits for all datasets except Antimalarial, which used a random split. Each task was trained for 100 epochs, and results were averaged over 3 random seeds.
327
+
328
+ The complete hyperparameter configurations for these benchmarks are available here: [ChemBERTa3 configs](https://github.com/emapco/ModChemBERT/tree/main/conf/chemberta3)
329
 
330
  ### Evaluation Methodology
331
+ - Classification Metric: ROC AUC
332
+ - Regression Metric: RMSE
333
  - Aggregation: Mean ± standard deviation of the triplicate results.
334
+ - Input Constraints: SMILES truncated / filtered to ≤200 tokens, following ChemBERTa-3's recommendation.
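The aggregation step amounts to reporting mean ± standard deviation over the three seed runs. A small sketch (whether sample or population standard deviation was used is not stated; the sample form, `ddof=1`, is assumed here):

```python
import numpy as np

def aggregate(scores):
    """Report triplicate results as 'mean ± std' (sample std, ddof=1 assumed)."""
    scores = np.asarray(scores, dtype=float)
    return f"{scores.mean():.4f} ± {scores.std(ddof=1):.4f}"

# Hypothetical ROC AUC scores from three random seeds
print(aggregate([0.81, 0.85, 0.83]))  # 0.8300 ± 0.0200
```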
335
 
336
  ### Results
337
  <details><summary>Click to expand</summary>
338
 
339
+ #### ChemBERTa-3 Classification Datasets (ROC AUC - Higher is better)
340
 
341
  | Model | BACE↑ | BBBP↑ | CLINTOX↑ | HIV↑ | SIDER↑ | TOX21↑ | AVG† |
342
  | ---------------------------------------------------------------------------- | ----------------- | ----------------- | --------------------- | --------------------- | --------------------- | ----------------- | ------ |
 
344
  | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 0.781 ± 0.019 | 0.700 ± 0.027 | 0.979 ± 0.022 | 0.740 ± 0.013 | 0.611 ± 0.002 | 0.718 ± 0.011 | 0.7548 |
345
  | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 0.819 ± 0.019 | 0.735 ± 0.019 | 0.839 ± 0.013 | 0.762 ± 0.005 | 0.618 ± 0.005 | 0.723 ± 0.012 | 0.7493 |
346
  | MoLFormer-LHPC* | **0.887 ± 0.004** | **0.908 ± 0.013** | 0.993 ± 0.004 | 0.750 ± 0.003 | 0.622 ± 0.007 | **0.791 ± 0.014** | 0.8252 |
347
+ | | | | | | | | |
348
  | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8065 ± 0.0103 | 0.7222 ± 0.0150 | 0.9709 ± 0.0227 | ***0.7800 ± 0.0133*** | 0.6419 ± 0.0113 | 0.7400 ± 0.0044 | 0.7769 |
349
  | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8224 ± 0.0156 | 0.7402 ± 0.0095 | 0.9820 ± 0.0138 | 0.7702 ± 0.0020 | 0.6303 ± 0.0039 | 0.7360 ± 0.0036 | 0.7802 |
350
  | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.7924 ± 0.0155 | 0.7282 ± 0.0058 | 0.9725 ± 0.0213 | 0.7770 ± 0.0047 | 0.6542 ± 0.0128 | *0.7646 ± 0.0039* | 0.7815 |
351
  | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8213 ± 0.0051 | 0.7356 ± 0.0094 | 0.9664 ± 0.0202 | 0.7750 ± 0.0048 | 0.6415 ± 0.0094 | 0.7263 ± 0.0036 | 0.7777 |
352
  | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | *0.8346 ± 0.0045* | *0.7573 ± 0.0120* | ***0.9938 ± 0.0017*** | 0.7737 ± 0.0034 | ***0.6600 ± 0.0061*** | 0.7518 ± 0.0047 | 0.7952 |
353
 
354
+ #### ChemBERTa-3 Regression Datasets (RMSE - Lower is better)
355
 
356
  | Model | BACE↓ | CLEARANCE↓ | ESOL↓ | FREESOLV↓ | LIPO↓ | AVG‡ |
357
  | ---------------------------------------------------------------------------- | --------------------- | ---------------------- | --------------------- | --------------------- | --------------------- | ---------------- |
 
359
  | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 1.011 ± 0.038 | 51.582 ± 3.079 | 0.920 ± 0.011 | 0.536 ± 0.016 | 0.758 ± 0.013 | 0.8063 / 10.9614 |
360
  | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 1.094 ± 0.126 | 52.058 ± 2.767 | 0.829 ± 0.019 | 0.572 ± 0.023 | 0.728 ± 0.016 | 0.8058 / 11.0562 |
361
  | MoLFormer-LHPC* | 1.201 ± 0.100 | 45.74 ± 2.637 | 0.848 ± 0.031 | 0.683 ± 0.040 | 0.895 ± 0.080 | 0.9068 / 9.8734 |
362
+ | | | | | | |
363
  | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 1.0893 ± 0.1319 | 49.0005 ± 1.2787 | 0.8456 ± 0.0406 | 0.5491 ± 0.0134 | 0.7147 ± 0.0062 | 0.7997 / 10.4398 |
364
  | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.9931 ± 0.0258 | 45.4951 ± 0.7112 | 0.9319 ± 0.0153 | 0.6049 ± 0.0666 | 0.6874 ± 0.0040 | 0.8043 / 9.7425 |
365
  | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 1.0304 ± 0.1146 | 47.8418 ± 0.4070 | ***0.7669 ± 0.0024*** | 0.5293 ± 0.0267 | 0.6708 ± 0.0074 | 0.7493 / 10.1678 |
366
  | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.9713 ± 0.0224 | ***42.8010 ± 3.3475*** | 0.8169 ± 0.0268 | 0.5445 ± 0.0257 | 0.6820 ± 0.0028 | 0.7537 / 9.1631 |
367
  | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.9665 ± 0.0250*** | 44.0137 ± 1.1110 | 0.8158 ± 0.0115 | ***0.4979 ± 0.0158*** | ***0.6505 ± 0.0126*** | 0.7327 / 9.3889 |
368
 
369
+ #### Mswahili, et al. [8] Proposed Classification Datasets (ROC AUC - Higher is better)
370
+
371
+ | Model | Antimalarial↑ | Cocrystal↑ | COVID19↑ | AVG† |
372
+ | ---------------------------------------------------------------------------- | --------------------- | --------------------- | --------------------- | ------ |
373
+ | **Tasks** | 1 | 1 | 1 | |
374
+ | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8707 ± 0.0032 | 0.7967 ± 0.0124 | 0.8106 ± 0.0170 | 0.8260 |
375
+ | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8756 ± 0.0056 | 0.8288 ± 0.0143 | 0.8029 ± 0.0159 | 0.8358 |
376
+ | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.8832 ± 0.0051 | 0.7866 ± 0.0204 | ***0.8308 ± 0.0026*** | 0.8335 |
377
+ | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8819 ± 0.0052 | 0.8550 ± 0.0106 | 0.8013 ± 0.0118 | 0.8461 |
378
+ | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.8966 ± 0.0045*** | ***0.8654 ± 0.0080*** | 0.8132 ± 0.0195 | 0.8584 |
379
+
380
+ #### ADME/AstraZeneca Regression Datasets (RMSE - Lower is better)
381
+
382
+ Hyperparameter optimization for the TAFT stage appears to induce overfitting, as the `MLM + DAPT + TAFT OPT` model shows slightly degraded performance on the ADME/AstraZeneca datasets compared to the `MLM + DAPT + TAFT` model.
383
+ The `MLM + DAPT + TAFT` model, a merge of unoptimized TAFT checkpoints trained with `max_seq_mean` pooling, achieved the best overall performance across the ADME/AstraZeneca datasets.
384
+
385
+ | Model | ADME microsom_stab_h↓ | ADME microsom_stab_r↓ | ADME permeability↓ | ADME ppb_h↓ | ADME ppb_r↓ | ADME solubility↓ | AstraZeneca CL↓ | AstraZeneca LogD74↓ | AstraZeneca PPB↓ | AstraZeneca Solubility↓ | AVG† |
386
+ | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
387
389
+ | **Tasks** | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
390
+ | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.4489 ± 0.0114 | 0.4685 ± 0.0225 | 0.5423 ± 0.0076 | 0.8041 ± 0.0378 | 0.7849 ± 0.0394 | 0.5191 ± 0.0147 | **0.4812 ± 0.0073** | 0.8204 ± 0.0070 | 0.1365 ± 0.0066 | 0.9614 ± 0.0189 | 0.5967 |
391
+ | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | **0.4199 ± 0.0064** | 0.4568 ± 0.0091 | 0.5042 ± 0.0135 | 0.8376 ± 0.0629 | 0.8446 ± 0.0756 | 0.4800 ± 0.0118 | 0.5351 ± 0.0036 | 0.8191 ± 0.0066 | 0.1237 ± 0.0022 | 0.9280 ± 0.0088 | 0.5949 |
392
+ | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.4375 ± 0.0027 | 0.4542 ± 0.0024 | 0.5202 ± 0.0141 | **0.7618 ± 0.0138** | 0.7027 ± 0.0023 | 0.5023 ± 0.0107 | 0.5104 ± 0.0110 | 0.7599 ± 0.0050 | 0.1233 ± 0.0088 | 0.8730 ± 0.0112 | 0.5645 |
393
+ | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.4206 ± 0.0071 | **0.4400 ± 0.0039** | **0.4899 ± 0.0068** | 0.8927 ± 0.0163 | **0.6942 ± 0.0397** | 0.4641 ± 0.0082 | 0.5022 ± 0.0136 | **0.7467 ± 0.0041** | 0.1195 ± 0.0026 | **0.8564 ± 0.0265** | 0.5626 |
394
+ | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | 0.4248 ± 0.0041 | 0.4403 ± 0.0046 | 0.5025 ± 0.0029 | 0.8901 ± 0.0123 | 0.7268 ± 0.0090 | **0.4627 ± 0.0083** | 0.4932 ± 0.0079 | 0.7596 ± 0.0044 | **0.1150 ± 0.0002** | 0.8735 ± 0.0053 | 0.5689 |
395
+
396
+
397
  **Bold** indicates the best result in the column; *italic* indicates the best result among ModChemBERT checkpoints.<br/>
398
  \* Published results from the ChemBERTa-3 [7] paper for optimized chemical language models using DeepChem scaffold splits.<br/>
399
+ † AVG column shows the mean score across the table's tasks.<br/>
400
+ ‡ AVG column shows the mean scores across regression tasks without and with the clearance score.
401
 
402
  </details>
403
 
 
437
  | esol | 64 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |
438
  | freesolv | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
439
  | lipo | 32 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
440
+ | antimalarial | 16 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
441
+ | cocrystal | 16 | max_cls | 3 | 0.1 | 0.0 | 0.1 |
442
+ | covid19 | 16 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |
443
 
444
  </details>
445
 
 
473
  ```
474
 
475
  ## References
476
+ 1. Kallergis, G., Asgari, E., Empting, M. et al. Domain adaptable language modeling of chemical compounds identifies potent pathoblockers for Pseudomonas aeruginosa. Commun Chem 8, 114 (2025). https://doi.org/10.1038/s42004-025-01484-4
477
  2. Behrendt, Maike, Stefan Sylvius Wagner, and Stefan Harmeling. "MaxPoolBERT: Enhancing BERT Classification via Layer-and Token-Wise Aggregation." arXiv preprint arXiv:2505.15696 (2025).
478
  3. Sultan, Afnan, et al. "Transformers for molecular property prediction: Domain adaptation efficiently improves performance." arXiv preprint arXiv:2503.03360 (2025).
479
  4. Warner, Benjamin, et al. "Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference." arXiv preprint arXiv:2412.13663 (2024).
480
+ 5. Clavié, Benjamin. "JaColBERTv2.5: Optimising Multi-Vector Retrievers to Create State-of-the-Art Japanese Retrievers with Constrained Resources." arXiv preprint arXiv:2407.20750 (2024).
481
  6. Grattafiori, Aaron, et al. "The llama 3 herd of models." arXiv preprint arXiv:2407.21783 (2024).
482
+ 7. Singh R, Barsainyan AA, Irfan R, Amorin CJ, He S, Davis T, et al. "ChemBERTa-3: An Open Source Training Framework for Chemical Foundation Models." ChemRxiv preprint (2025). doi:10.26434/chemrxiv-2025-4glrl-v2
483
+ 8. Mswahili, M.E., Hwang, J., Rajapakse, J.C. et al. Positional embeddings and zero-shot learning using BERT for molecular-property prediction. J Cheminform 17, 17 (2025). https://doi.org/10.1186/s13321-025-00959-9
484
+ 9. Mswahili, M.E.; Ndomba, G.E.; Jo, K.; Jeong, Y.-S. Graph Neural Network and BERT Model for Antimalarial Drug Predictions Using Plasmodium Potential Targets. Applied Sciences, 2024, 14(4), 1472. https://doi.org/10.3390/app14041472
485
+ 10. Mswahili, M.E.; Lee, M.-J.; Martin, G.L.; Kim, J.; Kim, P.; Choi, G.J.; Jeong, Y.-S. Cocrystal Prediction Using Machine Learning Models and Descriptors. Applied Sciences, 2021, 11, 1323. https://doi.org/10.3390/app11031323
486
+ 11. Harigua-Souiai, E.; Heinhane, M.M.; Abdelkrim, Y.Z.; Souiai, O.; Abdeljaoued-Tej, I.; Guizani, I. Deep Learning Algorithms Achieved Satisfactory Predictions When Trained on a Novel Collection of Anticoronavirus Molecules. Frontiers in Genetics, 2021, 12:744170. https://doi.org/10.3389/fgene.2021.744170
487
+ 12. Cheng Fang, Ye Wang, Richard Grater, Sudarshan Kapadnis, Cheryl Black, Patrick Trapa, and Simone Sciabola. "Prospective Validation of Machine Learning Algorithms for Absorption, Distribution, Metabolism, and Excretion Prediction: An Industrial Perspective" Journal of Chemical Information and Modeling 2023 63 (11), 3263-3274 https://doi.org/10.1021/acs.jcim.3c00160
logs_modchembert_classification_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_antimalarial_epochs100_batch_size16_20250926_211449.log ADDED
@@ -0,0 +1,355 @@
1
+ 2025-09-26 21:14:49,298 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Running benchmark for dataset: antimalarial
2
+ 2025-09-26 21:14:49,298 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - dataset: antimalarial, tasks: ['label'], epochs: 100, learning rate: 3e-05
3
+ 2025-09-26 21:14:49,306 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Starting triplicate run 1 for dataset antimalarial at 2025-09-26_21-14-49
4
+ 2025-09-26 21:15:05,279 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 1/100 | Train Loss: 0.5719 | Val mean-roc_auc_score: 0.8113
5
+ 2025-09-26 21:15:05,279 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 240
6
+ 2025-09-26 21:15:06,155 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8113
7
+ 2025-09-26 21:15:22,667 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 2/100 | Train Loss: 0.4750 | Val mean-roc_auc_score: 0.8518
8
+ 2025-09-26 21:15:22,855 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 480
9
+ 2025-09-26 21:15:23,400 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8518
10
+ 2025-09-26 21:15:38,618 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 3/100 | Train Loss: 0.3750 | Val mean-roc_auc_score: 0.8883
11
+ 2025-09-26 21:15:38,805 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 720
12
+ 2025-09-26 21:15:39,375 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8883
13
+ 2025-09-26 21:15:53,759 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 4/100 | Train Loss: 0.3312 | Val mean-roc_auc_score: 0.8997
14
+ 2025-09-26 21:15:53,930 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 960
15
+ 2025-09-26 21:15:54,481 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8997
16
+ 2025-09-26 21:16:12,617 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 5/100 | Train Loss: 0.2838 | Val mean-roc_auc_score: 0.9087
17
+ 2025-09-26 21:16:12,810 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 1200
18
+ 2025-09-26 21:16:13,532 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.9087
19
+ 2025-09-26 21:16:28,195 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 6/100 | Train Loss: 0.2328 | Val mean-roc_auc_score: 0.9123
20
+ 2025-09-26 21:16:28,688 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 1440
21
+ 2025-09-26 21:16:29,243 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.9123
22
+ 2025-09-26 21:16:46,895 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 7/100 | Train Loss: 0.1930 | Val mean-roc_auc_score: 0.9101
23
+ 2025-09-26 21:17:01,831 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 8/100 | Train Loss: 0.1406 | Val mean-roc_auc_score: 0.9118
24
+ 2025-09-26 21:17:20,142 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 9/100 | Train Loss: 0.1135 | Val mean-roc_auc_score: 0.9155
25
+ 2025-09-26 21:17:20,297 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 2160
26
+ 2025-09-26 21:17:20,858 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.9155
27
+ 2025-09-26 21:17:36,115 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 10/100 | Train Loss: 0.0894 | Val mean-roc_auc_score: 0.9161
28
+ 2025-09-26 21:17:36,303 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 2400
29
+ 2025-09-26 21:17:36,867 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.9161
30
+ 2025-09-26 21:17:54,019 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 11/100 | Train Loss: 0.0555 | Val mean-roc_auc_score: 0.9052
31
+ 2025-09-26 21:18:10,758 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 12/100 | Train Loss: 0.0910 | Val mean-roc_auc_score: 0.9085
32
+ 2025-09-26 21:18:30,116 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 13/100 | Train Loss: 0.1094 | Val mean-roc_auc_score: 0.9071
33
+ 2025-09-26 21:18:46,872 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 14/100 | Train Loss: 0.0828 | Val mean-roc_auc_score: 0.9071
34
+ 2025-09-26 21:19:04,990 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 15/100 | Train Loss: 0.0619 | Val mean-roc_auc_score: 0.9086
35
+ 2025-09-26 21:19:20,437 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 16/100 | Train Loss: 0.0469 | Val mean-roc_auc_score: 0.9085
36
+ 2025-09-26 21:19:37,854 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 17/100 | Train Loss: 0.0398 | Val mean-roc_auc_score: 0.9138
37
+ 2025-09-26 21:19:55,732 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 18/100 | Train Loss: 0.0660 | Val mean-roc_auc_score: 0.9055
38
+ 2025-09-26 21:20:11,011 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 19/100 | Train Loss: 0.0346 | Val mean-roc_auc_score: 0.9034
39
+ 2025-09-26 21:20:29,055 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 20/100 | Train Loss: 0.0280 | Val mean-roc_auc_score: 0.9060
40
+ 2025-09-26 21:20:44,124 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 21/100 | Train Loss: 0.0391 | Val mean-roc_auc_score: 0.9085
41
+ 2025-09-26 21:21:03,651 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 22/100 | Train Loss: 0.0461 | Val mean-roc_auc_score: 0.9097
42
+ 2025-09-26 21:21:21,511 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 23/100 | Train Loss: 0.0369 | Val mean-roc_auc_score: 0.9096
43
+ 2025-09-26 21:21:36,003 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 24/100 | Train Loss: 0.0253 | Val mean-roc_auc_score: 0.9080
44
+ 2025-09-26 21:21:52,519 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 25/100 | Train Loss: 0.0317 | Val mean-roc_auc_score: 0.9096
45
+ 2025-09-26 21:22:10,222 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 26/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.9120
46
+ 2025-09-26 21:22:26,306 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 27/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.9120
47
+ 2025-09-26 21:22:44,915 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 28/100 | Train Loss: 0.0303 | Val mean-roc_auc_score: 0.9105
48
+ 2025-09-26 21:23:00,905 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 29/100 | Train Loss: 0.0158 | Val mean-roc_auc_score: 0.9097
49
+ 2025-09-26 21:23:18,492 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 30/100 | Train Loss: 0.0108 | Val mean-roc_auc_score: 0.9087
50
+ 2025-09-26 21:23:37,223 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 31/100 | Train Loss: 0.0108 | Val mean-roc_auc_score: 0.9071
51
+ 2025-09-26 21:23:53,215 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 32/100 | Train Loss: 0.0142 | Val mean-roc_auc_score: 0.9044
52
+ 2025-09-26 21:24:10,987 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 33/100 | Train Loss: 0.0132 | Val mean-roc_auc_score: 0.9099
53
+ 2025-09-26 21:24:27,957 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 34/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.9121
54
+ 2025-09-26 21:24:43,585 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 35/100 | Train Loss: 0.0154 | Val mean-roc_auc_score: 0.9102
55
+ 2025-09-26 21:25:01,669 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 36/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.9108
56
+ 2025-09-26 21:25:17,502 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 37/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.9113
57
+ 2025-09-26 21:25:35,677 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 38/100 | Train Loss: 0.0112 | Val mean-roc_auc_score: 0.9107
58
+ 2025-09-26 21:25:53,795 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 39/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.9108
59
+ 2025-09-26 21:26:07,773 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 40/100 | Train Loss: 0.0158 | Val mean-roc_auc_score: 0.9083
60
+ 2025-09-26 21:26:23,715 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 41/100 | Train Loss: 0.0200 | Val mean-roc_auc_score: 0.9083
61
+ 2025-09-26 21:26:43,973 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 42/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.9064
62
+ 2025-09-26 21:26:59,050 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 43/100 | Train Loss: 0.0224 | Val mean-roc_auc_score: 0.9094
63
+ 2025-09-26 21:27:15,868 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 44/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.9126
64
+ 2025-09-26 21:27:31,572 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 45/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.9079
65
+ 2025-09-26 21:27:50,156 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 46/100 | Train Loss: 0.0081 | Val mean-roc_auc_score: 0.9064
66
+ 2025-09-26 21:28:07,566 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 47/100 | Train Loss: 0.0109 | Val mean-roc_auc_score: 0.9054
67
+ 2025-09-26 21:28:22,218 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 48/100 | Train Loss: 0.0102 | Val mean-roc_auc_score: 0.9048
68
+ 2025-09-26 21:28:40,653 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 49/100 | Train Loss: 0.0132 | Val mean-roc_auc_score: 0.8964
69
+ 2025-09-26 21:28:57,107 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 50/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.9077
70
+ 2025-09-26 21:29:15,668 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 51/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.9078
71
+ 2025-09-26 21:29:33,460 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 52/100 | Train Loss: 0.0115 | Val mean-roc_auc_score: 0.9102
72
+ 2025-09-26 21:29:49,019 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 53/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.9094
73
+ 2025-09-26 21:30:08,177 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 54/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.9121
74
+ 2025-09-26 21:30:24,146 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 55/100 | Train Loss: 0.0109 | Val mean-roc_auc_score: 0.9090
75
+ 2025-09-26 21:30:42,100 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 56/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.9080
76
+ 2025-09-26 21:30:58,574 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 57/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.9075
77
+ 2025-09-26 21:31:14,530 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 58/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.9084
78
+ 2025-09-26 21:31:33,316 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 59/100 | Train Loss: 0.0078 | Val mean-roc_auc_score: 0.9096
79
+ 2025-09-26 21:31:49,971 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 60/100 | Train Loss: 0.0127 | Val mean-roc_auc_score: 0.9078
80
+ 2025-09-26 21:32:05,353 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 61/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.9085
81
+ 2025-09-26 21:32:23,282 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 62/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.9081
82
+ 2025-09-26 21:32:40,833 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 63/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.9077
83
+ 2025-09-26 21:32:59,219 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 64/100 | Train Loss: 0.0096 | Val mean-roc_auc_score: 0.9073
84
+ 2025-09-26 21:33:15,132 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 65/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.9079
85
+ 2025-09-26 21:33:30,817 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 66/100 | Train Loss: 0.0128 | Val mean-roc_auc_score: 0.9072
86
+ 2025-09-26 21:33:50,985 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 67/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.9049
87
+ 2025-09-26 21:34:07,666 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 68/100 | Train Loss: 0.0198 | Val mean-roc_auc_score: 0.9060
88
+ 2025-09-26 21:34:26,090 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 69/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.9088
89
+ 2025-09-26 21:34:39,764 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 70/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.9085
90
+ 2025-09-26 21:34:57,209 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 71/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.9096
91
+ 2025-09-26 21:35:16,866 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 72/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.9073
92
+ 2025-09-26 21:35:32,884 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 73/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.9079
93
+ 2025-09-26 21:35:49,406 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 74/100 | Train Loss: 0.0080 | Val mean-roc_auc_score: 0.9078
94
+ 2025-09-26 21:36:08,562 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 75/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.9051
95
+ 2025-09-26 21:36:25,157 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 76/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.9062
96
+ 2025-09-26 21:36:44,068 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 77/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.9093
97
+ 2025-09-26 21:37:00,214 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 78/100 | Train Loss: 0.0032 | Val mean-roc_auc_score: 0.9083
98
+ 2025-09-26 21:37:15,679 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 79/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.9085
99
+ 2025-09-26 21:37:35,211 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 80/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.9087
100
+ 2025-09-26 21:37:51,093 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 81/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.9094
101
+ 2025-09-26 21:38:07,775 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 82/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.9099
102
+ 2025-09-26 21:38:26,827 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 83/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.9108
103
+ 2025-09-26 21:38:44,271 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 84/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.9069
104
+ 2025-09-26 21:39:02,944 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 85/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.9081
105
+ 2025-09-26 21:39:19,514 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 86/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.9111
106
+ 2025-09-26 21:39:36,673 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 87/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.9098
107
+ 2025-09-26 21:39:55,916 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 88/100 | Train Loss: 0.0181 | Val mean-roc_auc_score: 0.9124
108
+ 2025-09-26 21:40:12,323 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 89/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.9117
109
+ 2025-09-26 21:40:29,010 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 90/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.9120
110
+ 2025-09-26 21:40:47,996 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 91/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.9124
111
+ 2025-09-26 21:41:06,013 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 92/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.9106
112
+ 2025-09-26 21:41:22,450 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 93/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.9109
113
+ 2025-09-26 21:41:42,324 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 94/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.9099
114
+ 2025-09-26 21:41:58,339 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 95/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.9106
115
+ 2025-09-26 21:42:16,950 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 96/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.9102
116
+ 2025-09-26 21:42:34,009 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 97/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.9103
117
+ 2025-09-26 21:42:50,043 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 98/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.9095
118
+ 2025-09-26 21:43:07,973 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 99/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.9081
119
+ 2025-09-26 21:43:25,122 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 100/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.9090
120
+ 2025-09-26 21:43:26,180 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Test mean-roc_auc_score: 0.8915
121
+ 2025-09-26 21:43:26,519 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Starting triplicate run 2 for dataset antimalarial at 2025-09-26_21-43-26
122
+ 2025-09-26 21:43:41,489 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 1/100 | Train Loss: 0.5437 | Val mean-roc_auc_score: 0.8256
123
+ 2025-09-26 21:43:41,490 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 240
124
+ 2025-09-26 21:43:39,976 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8256
125
+ 2025-09-26 21:43:59,523 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 2/100 | Train Loss: 0.4656 | Val mean-roc_auc_score: 0.8692
126
+ 2025-09-26 21:43:59,727 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 480
127
+ 2025-09-26 21:44:00,315 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8692
128
+ 2025-09-26 21:44:16,117 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 3/100 | Train Loss: 0.4062 | Val mean-roc_auc_score: 0.8950
129
+ 2025-09-26 21:44:16,322 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 720
130
+ 2025-09-26 21:44:17,137 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8950
131
+ 2025-09-26 21:44:35,505 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 4/100 | Train Loss: 0.3354 | Val mean-roc_auc_score: 0.9032
132
+ 2025-09-26 21:44:35,670 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 960
133
+ 2025-09-26 21:44:36,254 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.9032
134
+ 2025-09-26 21:44:53,129 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 5/100 | Train Loss: 0.2725 | Val mean-roc_auc_score: 0.9070
135
+ 2025-09-26 21:44:53,334 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 1200
136
+ 2025-09-26 21:44:53,921 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.9070
137
+ 2025-09-26 21:45:10,655 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 6/100 | Train Loss: 0.2313 | Val mean-roc_auc_score: 0.8878
138
+ 2025-09-26 21:45:29,830 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 7/100 | Train Loss: 0.1609 | Val mean-roc_auc_score: 0.9061
139
+ 2025-09-26 21:45:45,786 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 8/100 | Train Loss: 0.1648 | Val mean-roc_auc_score: 0.9072
140
+ 2025-09-26 21:45:45,948 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 1920
141
+ 2025-09-26 21:45:46,527 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.9072
142
+ 2025-09-26 21:46:03,792 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 9/100 | Train Loss: 0.1328 | Val mean-roc_auc_score: 0.9047
143
+ 2025-09-26 21:46:21,810 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 10/100 | Train Loss: 0.0813 | Val mean-roc_auc_score: 0.9013
144
+ 2025-09-26 21:46:37,058 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 11/100 | Train Loss: 0.1305 | Val mean-roc_auc_score: 0.9040
145
+ 2025-09-26 21:46:55,621 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 12/100 | Train Loss: 0.0680 | Val mean-roc_auc_score: 0.9137
146
+ 2025-09-26 21:46:55,783 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 2880
147
+ 2025-09-26 21:46:54,034 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 12 with val mean-roc_auc_score: 0.9137
148
+ 2025-09-26 21:47:12,930 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 13/100 | Train Loss: 0.0793 | Val mean-roc_auc_score: 0.9022
149
+ 2025-09-26 21:47:28,271 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 14/100 | Train Loss: 0.0643 | Val mean-roc_auc_score: 0.9082
150
+ 2025-09-26 21:47:46,478 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 15/100 | Train Loss: 0.0395 | Val mean-roc_auc_score: 0.9073
151
+ 2025-09-26 21:48:02,114 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 16/100 | Train Loss: 0.0820 | Val mean-roc_auc_score: 0.9123
152
+ 2025-09-26 21:48:20,017 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 17/100 | Train Loss: 0.0301 | Val mean-roc_auc_score: 0.9121
153
+ 2025-09-26 21:48:38,500 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 18/100 | Train Loss: 0.1469 | Val mean-roc_auc_score: 0.9133
154
+ 2025-09-26 21:48:55,048 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 19/100 | Train Loss: 0.0375 | Val mean-roc_auc_score: 0.9121
155
+ 2025-09-26 21:49:12,470 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 20/100 | Train Loss: 0.0297 | Val mean-roc_auc_score: 0.9130
156
+ 2025-09-26 21:49:31,968 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 21/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.9154
157
+ 2025-09-26 21:49:32,481 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 5040
158
+ 2025-09-26 21:49:33,082 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 21 with val mean-roc_auc_score: 0.9154
159
+ 2025-09-26 21:49:49,615 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 22/100 | Train Loss: 0.0346 | Val mean-roc_auc_score: 0.9159
160
+ 2025-09-26 21:49:49,816 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 5280
161
+ 2025-09-26 21:49:50,460 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 22 with val mean-roc_auc_score: 0.9159
162
+ 2025-09-26 21:50:08,967 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 23/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.9117
163
+ 2025-09-26 21:50:24,371 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 24/100 | Train Loss: 0.0327 | Val mean-roc_auc_score: 0.9113
164
+ 2025-09-26 21:50:41,723 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 25/100 | Train Loss: 0.0262 | Val mean-roc_auc_score: 0.9093
165
+ 2025-09-26 21:51:00,692 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 26/100 | Train Loss: 0.0205 | Val mean-roc_auc_score: 0.9116
166
+ 2025-09-26 21:51:17,577 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 27/100 | Train Loss: 0.0211 | Val mean-roc_auc_score: 0.9066
167
+ 2025-09-26 21:51:34,790 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 28/100 | Train Loss: 0.0237 | Val mean-roc_auc_score: 0.9106
168
+ 2025-09-26 21:51:54,106 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 29/100 | Train Loss: 0.0120 | Val mean-roc_auc_score: 0.9121
169
+ 2025-09-26 21:52:11,224 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 30/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.9095
170
+ 2025-09-26 21:52:28,921 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 31/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.9087
171
+ 2025-09-26 21:52:48,022 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 32/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.9091
172
+ 2025-09-26 21:53:03,749 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 33/100 | Train Loss: 0.0241 | Val mean-roc_auc_score: 0.9104
173
+ 2025-09-26 21:53:23,342 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 34/100 | Train Loss: 0.0098 | Val mean-roc_auc_score: 0.9077
174
+ 2025-09-26 21:53:39,434 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 35/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.9081
175
+ 2025-09-26 21:53:56,033 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 36/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.9096
176
+ 2025-09-26 21:54:15,070 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 37/100 | Train Loss: 0.0233 | Val mean-roc_auc_score: 0.9100
177
+ 2025-09-26 21:54:33,637 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 38/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.9119
178
+ 2025-09-26 21:54:50,661 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 39/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.9119
179
+ 2025-09-26 21:55:09,091 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 40/100 | Train Loss: 0.0104 | Val mean-roc_auc_score: 0.9104
180
+ 2025-09-26 21:55:24,960 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 41/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.9106
181
+ 2025-09-26 21:55:44,811 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 42/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.9130
182
+ 2025-09-26 21:56:04,063 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 43/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.9108
183
+ 2025-09-26 21:56:20,794 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 44/100 | Train Loss: 0.0215 | Val mean-roc_auc_score: 0.9134
184
+ 2025-09-26 21:56:36,492 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 45/100 | Train Loss: 0.0109 | Val mean-roc_auc_score: 0.9132
185
+ 2025-09-26 21:56:56,619 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 46/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.9113
186
+ 2025-09-26 21:57:12,575 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 47/100 | Train Loss: 0.0118 | Val mean-roc_auc_score: 0.9141
187
+ 2025-09-26 21:57:32,339 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 48/100 | Train Loss: 0.0102 | Val mean-roc_auc_score: 0.9077
188
+ 2025-09-26 21:57:49,978 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 49/100 | Train Loss: 0.0102 | Val mean-roc_auc_score: 0.9090
189
+ 2025-09-26 21:58:08,309 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 50/100 | Train Loss: 0.0080 | Val mean-roc_auc_score: 0.9102
190
+ 2025-09-26 21:58:27,490 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 51/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.9082
191
+ 2025-09-26 21:58:45,544 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 52/100 | Train Loss: 0.0154 | Val mean-roc_auc_score: 0.9097
192
+ 2025-09-26 21:59:04,141 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 53/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.9099
193
+ 2025-09-26 21:59:22,092 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 54/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.9091
194
+ 2025-09-26 21:59:42,897 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 55/100 | Train Loss: 0.0098 | Val mean-roc_auc_score: 0.9081
195
+ 2025-09-26 22:00:01,333 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 56/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.9092
196
+ 2025-09-26 22:00:18,882 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 57/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.9120
197
+ 2025-09-26 22:00:36,609 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 58/100 | Train Loss: 0.0240 | Val mean-roc_auc_score: 0.9071
198
+ 2025-09-26 22:00:55,562 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 59/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.9041
199
+ 2025-09-26 22:01:14,280 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 60/100 | Train Loss: 0.0126 | Val mean-roc_auc_score: 0.9090
200
+ 2025-09-26 22:01:35,362 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 61/100 | Train Loss: 0.0130 | Val mean-roc_auc_score: 0.9083
+ 2025-09-26 22:01:54,372 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 62/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.9087
+ 2025-09-26 22:02:13,986 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 63/100 | Train Loss: 0.0144 | Val mean-roc_auc_score: 0.9068
+ 2025-09-26 22:02:34,440 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 64/100 | Train Loss: 0.0093 | Val mean-roc_auc_score: 0.9078
+ 2025-09-26 22:02:51,784 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 65/100 | Train Loss: 0.0077 | Val mean-roc_auc_score: 0.9073
+ 2025-09-26 22:03:09,324 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 66/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.9074
+ 2025-09-26 22:03:28,357 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 67/100 | Train Loss: 0.0115 | Val mean-roc_auc_score: 0.9086
+ 2025-09-26 22:03:43,969 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 68/100 | Train Loss: 0.0028 | Val mean-roc_auc_score: 0.9073
+ 2025-09-26 22:03:59,173 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 69/100 | Train Loss: 0.0124 | Val mean-roc_auc_score: 0.9076
+ 2025-09-26 22:04:17,198 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 70/100 | Train Loss: 0.0075 | Val mean-roc_auc_score: 0.9082
+ 2025-09-26 22:04:34,338 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 71/100 | Train Loss: 0.0135 | Val mean-roc_auc_score: 0.9047
+ 2025-09-26 22:04:53,254 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 72/100 | Train Loss: 0.0074 | Val mean-roc_auc_score: 0.9062
+ 2025-09-26 22:05:08,740 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 73/100 | Train Loss: 0.0093 | Val mean-roc_auc_score: 0.9076
+ 2025-09-26 22:05:24,675 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 74/100 | Train Loss: 0.0128 | Val mean-roc_auc_score: 0.9072
+ 2025-09-26 22:05:46,287 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 75/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.9066
+ 2025-09-26 22:06:03,881 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 76/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.9066
+ 2025-09-26 22:06:20,934 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 77/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.9063
+ 2025-09-26 22:06:39,217 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 78/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.9075
+ 2025-09-26 22:06:54,525 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 79/100 | Train Loss: 0.0075 | Val mean-roc_auc_score: 0.9078
+ 2025-09-26 22:07:13,532 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 80/100 | Train Loss: 0.0102 | Val mean-roc_auc_score: 0.9064
+ 2025-09-26 22:07:30,886 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 81/100 | Train Loss: 0.0075 | Val mean-roc_auc_score: 0.9062
+ 2025-09-26 22:07:49,517 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 82/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.9057
+ 2025-09-26 22:08:08,055 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 83/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.9065
+ 2025-09-26 22:08:27,231 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 84/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.9065
+ 2025-09-26 22:08:45,447 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 85/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.9062
+ 2025-09-26 22:09:01,919 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 86/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.9063
+ 2025-09-26 22:09:21,272 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 87/100 | Train Loss: 0.0084 | Val mean-roc_auc_score: 0.9071
+ 2025-09-26 22:09:38,992 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 88/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.9070
+ 2025-09-26 22:09:58,545 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 89/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.9068
+ 2025-09-26 22:10:14,911 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 90/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.9073
+ 2025-09-26 22:10:30,771 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 91/100 | Train Loss: 0.0137 | Val mean-roc_auc_score: 0.9068
+ 2025-09-26 22:10:50,741 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 92/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.9077
+ 2025-09-26 22:11:06,466 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 93/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.9081
+ 2025-09-26 22:11:22,098 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 94/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.9067
+ 2025-09-26 22:11:40,495 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 95/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.9070
+ 2025-09-26 22:11:57,764 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 96/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.9086
+ 2025-09-26 22:12:16,522 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 97/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.9087
+ 2025-09-26 22:12:33,820 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 98/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.9084
+ 2025-09-26 22:12:50,759 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 99/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.9072
+ 2025-09-26 22:13:11,338 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 100/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.9074
+ 2025-09-26 22:13:12,363 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Test mean-roc_auc_score: 0.8959
+ 2025-09-26 22:13:11,958 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Starting triplicate run 3 for dataset antimalarial at 2025-09-26_22-13-11
+ 2025-09-26 22:13:26,031 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 1/100 | Train Loss: 0.5250 | Val mean-roc_auc_score: 0.8265
+ 2025-09-26 22:13:26,031 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 240
+ 2025-09-26 22:13:26,857 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8265
+ 2025-09-26 22:13:44,062 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 2/100 | Train Loss: 0.4969 | Val mean-roc_auc_score: 0.8535
+ 2025-09-26 22:13:44,224 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 480
+ 2025-09-26 22:13:44,842 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8535
+ 2025-09-26 22:14:05,545 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 3/100 | Train Loss: 0.3656 | Val mean-roc_auc_score: 0.8932
+ 2025-09-26 22:14:05,771 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 720
+ 2025-09-26 22:14:06,420 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8932
+ 2025-09-26 22:14:23,298 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 4/100 | Train Loss: 0.3438 | Val mean-roc_auc_score: 0.9029
+ 2025-09-26 22:14:23,504 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 960
+ 2025-09-26 22:14:24,125 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.9029
+ 2025-09-26 22:14:41,217 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 5/100 | Train Loss: 0.2313 | Val mean-roc_auc_score: 0.9018
+ 2025-09-26 22:14:59,522 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 6/100 | Train Loss: 0.2172 | Val mean-roc_auc_score: 0.9013
+ 2025-09-26 22:15:15,503 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 7/100 | Train Loss: 0.1516 | Val mean-roc_auc_score: 0.9020
+ 2025-09-26 22:15:31,331 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 8/100 | Train Loss: 0.1148 | Val mean-roc_auc_score: 0.8987
+ 2025-09-26 22:15:50,463 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 9/100 | Train Loss: 0.1036 | Val mean-roc_auc_score: 0.9089
+ 2025-09-26 22:15:50,638 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 2160
+ 2025-09-26 22:15:51,313 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.9089
+ 2025-09-26 22:16:07,132 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 10/100 | Train Loss: 0.0950 | Val mean-roc_auc_score: 0.9058
+ 2025-09-26 22:16:25,260 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 11/100 | Train Loss: 0.1484 | Val mean-roc_auc_score: 0.8993
+ 2025-09-26 22:16:42,423 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 12/100 | Train Loss: 0.0777 | Val mean-roc_auc_score: 0.9075
+ 2025-09-26 22:17:00,517 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 13/100 | Train Loss: 0.0396 | Val mean-roc_auc_score: 0.9050
+ 2025-09-26 22:17:19,189 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 14/100 | Train Loss: 0.0576 | Val mean-roc_auc_score: 0.9040
+ 2025-09-26 22:17:35,344 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 15/100 | Train Loss: 0.0350 | Val mean-roc_auc_score: 0.9067
+ 2025-09-26 22:17:51,329 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 16/100 | Train Loss: 0.0582 | Val mean-roc_auc_score: 0.9023
+ 2025-09-26 22:18:12,399 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 17/100 | Train Loss: 0.0660 | Val mean-roc_auc_score: 0.9024
+ 2025-09-26 22:18:29,007 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 18/100 | Train Loss: 0.0245 | Val mean-roc_auc_score: 0.9039
+ 2025-09-26 22:18:46,012 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 19/100 | Train Loss: 0.0542 | Val mean-roc_auc_score: 0.8980
+ 2025-09-26 22:19:06,043 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 20/100 | Train Loss: 0.0359 | Val mean-roc_auc_score: 0.9068
+ 2025-09-26 22:19:24,169 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 21/100 | Train Loss: 0.0170 | Val mean-roc_auc_score: 0.9062
+ 2025-09-26 22:19:40,011 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 22/100 | Train Loss: 0.0637 | Val mean-roc_auc_score: 0.9074
+ 2025-09-26 22:19:58,616 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 23/100 | Train Loss: 0.0318 | Val mean-roc_auc_score: 0.9088
+ 2025-09-26 22:20:14,382 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 24/100 | Train Loss: 0.0243 | Val mean-roc_auc_score: 0.9142
+ 2025-09-26 22:20:14,562 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Global step of best model: 5760
+ 2025-09-26 22:20:15,168 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Best model saved at epoch 24 with val mean-roc_auc_score: 0.9142
+ 2025-09-26 22:20:32,400 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 25/100 | Train Loss: 0.0330 | Val mean-roc_auc_score: 0.9043
+ 2025-09-26 22:20:50,550 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 26/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.9020
+ 2025-09-26 22:21:07,640 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 27/100 | Train Loss: 0.0244 | Val mean-roc_auc_score: 0.9044
+ 2025-09-26 22:21:26,400 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 28/100 | Train Loss: 0.0137 | Val mean-roc_auc_score: 0.9065
+ 2025-09-26 22:21:42,671 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 29/100 | Train Loss: 0.0133 | Val mean-roc_auc_score: 0.9078
+ 2025-09-26 22:21:58,656 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 30/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.9034
+ 2025-09-26 22:22:15,788 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 31/100 | Train Loss: 0.0271 | Val mean-roc_auc_score: 0.9043
+ 2025-09-26 22:22:31,709 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 32/100 | Train Loss: 0.0504 | Val mean-roc_auc_score: 0.9029
+ 2025-09-26 22:22:49,301 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 33/100 | Train Loss: 0.0426 | Val mean-roc_auc_score: 0.9042
+ 2025-09-26 22:23:07,379 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 34/100 | Train Loss: 0.0185 | Val mean-roc_auc_score: 0.9061
+ 2025-09-26 22:23:22,996 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 35/100 | Train Loss: 0.0084 | Val mean-roc_auc_score: 0.9058
+ 2025-09-26 22:23:42,032 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 36/100 | Train Loss: 0.0121 | Val mean-roc_auc_score: 0.9057
+ 2025-09-26 22:23:58,913 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 37/100 | Train Loss: 0.0106 | Val mean-roc_auc_score: 0.9079
+ 2025-09-26 22:24:14,938 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 38/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.9075
+ 2025-09-26 22:24:33,394 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 39/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.9067
+ 2025-09-26 22:24:48,862 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 40/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.9060
+ 2025-09-26 22:25:07,794 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 41/100 | Train Loss: 0.0116 | Val mean-roc_auc_score: 0.9085
+ 2025-09-26 22:25:25,098 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 42/100 | Train Loss: 0.0096 | Val mean-roc_auc_score: 0.9061
+ 2025-09-26 22:25:40,226 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 43/100 | Train Loss: 0.0301 | Val mean-roc_auc_score: 0.9014
+ 2025-09-26 22:25:57,460 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 44/100 | Train Loss: 0.0133 | Val mean-roc_auc_score: 0.9061
+ 2025-09-26 22:26:12,615 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 45/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.9067
+ 2025-09-26 22:26:31,195 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 46/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.9064
+ 2025-09-26 22:26:51,015 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 47/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.9074
+ 2025-09-26 22:27:08,180 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 48/100 | Train Loss: 0.0244 | Val mean-roc_auc_score: 0.9078
+ 2025-09-26 22:27:27,615 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 49/100 | Train Loss: 0.0124 | Val mean-roc_auc_score: 0.9086
+ 2025-09-26 22:27:47,452 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 50/100 | Train Loss: 0.0113 | Val mean-roc_auc_score: 0.9088
+ 2025-09-26 22:28:03,156 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 51/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.9077
+ 2025-09-26 22:28:22,777 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 52/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.9065
+ 2025-09-26 22:28:38,358 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 53/100 | Train Loss: 0.0128 | Val mean-roc_auc_score: 0.9070
+ 2025-09-26 22:28:53,906 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 54/100 | Train Loss: 0.0102 | Val mean-roc_auc_score: 0.9071
+ 2025-09-26 22:29:13,580 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 55/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.9066
+ 2025-09-26 22:29:29,041 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 56/100 | Train Loss: 0.0135 | Val mean-roc_auc_score: 0.9074
+ 2025-09-26 22:29:45,811 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 57/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.9082
+ 2025-09-26 22:30:05,632 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 58/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.9064
+ 2025-09-26 22:30:23,682 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 59/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.9070
+ 2025-09-26 22:30:40,091 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 60/100 | Train Loss: 0.0102 | Val mean-roc_auc_score: 0.9063
+ 2025-09-26 22:30:58,402 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 61/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.9067
+ 2025-09-26 22:31:14,154 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 62/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.9064
+ 2025-09-26 22:31:33,573 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 63/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.9053
+ 2025-09-26 22:31:50,055 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 64/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.9055
+ 2025-09-26 22:32:05,346 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 65/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.9058
+ 2025-09-26 22:32:24,227 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 66/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.9020
+ 2025-09-26 22:32:40,837 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 67/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.9069
+ 2025-09-26 22:32:56,896 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 68/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.9070
+ 2025-09-26 22:33:18,170 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 69/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.9074
+ 2025-09-26 22:33:33,669 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 70/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.9085
+ 2025-09-26 22:33:52,709 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 71/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.9074
+ 2025-09-26 22:34:08,592 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 72/100 | Train Loss: 0.0119 | Val mean-roc_auc_score: 0.9046
+ 2025-09-26 22:34:24,603 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 73/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.9053
+ 2025-09-26 22:34:43,173 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 74/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.9051
+ 2025-09-26 22:35:00,167 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 75/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.9050
+ 2025-09-26 22:35:16,388 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 76/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.9052
+ 2025-09-26 22:35:35,874 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 77/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.9054
+ 2025-09-26 22:35:51,948 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 78/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.9049
+ 2025-09-26 22:36:10,327 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 79/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.9053
+ 2025-09-26 22:36:27,039 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 80/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.9060
+ 2025-09-26 22:36:41,993 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 81/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.9092
+ 2025-09-26 22:37:00,972 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 82/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.9092
+ 2025-09-26 22:37:15,460 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 83/100 | Train Loss: 0.0096 | Val mean-roc_auc_score: 0.9094
+ 2025-09-26 22:37:32,670 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 84/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.9096
+ 2025-09-26 22:37:45,814 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 85/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.9099
+ 2025-09-26 22:38:00,480 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 86/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.9096
+ 2025-09-26 22:38:16,513 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 87/100 | Train Loss: 0.0078 | Val mean-roc_auc_score: 0.9097
+ 2025-09-26 22:38:30,188 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 88/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.9097
+ 2025-09-26 22:38:45,801 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 89/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.9094
+ 2025-09-26 22:38:59,347 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 90/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.9097
+ 2025-09-26 22:39:14,723 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 91/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.9087
+ 2025-09-26 22:39:29,705 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 92/100 | Train Loss: 0.0057 | Val mean-roc_auc_score: 0.9099
+ 2025-09-26 22:39:45,639 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 93/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.9109
+ 2025-09-26 22:39:59,082 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 94/100 | Train Loss: 0.0120 | Val mean-roc_auc_score: 0.9089
+ 2025-09-26 22:40:14,445 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 95/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.9089
+ 2025-09-26 22:40:28,089 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 96/100 | Train Loss: 0.0078 | Val mean-roc_auc_score: 0.9102
+ 2025-09-26 22:40:44,309 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 97/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.9105
+ 2025-09-26 22:40:57,583 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 98/100 | Train Loss: 0.0019 | Val mean-roc_auc_score: 0.9104
+ 2025-09-26 22:41:13,404 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 99/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.9098
+ 2025-09-26 22:41:28,258 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Epoch 100/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.9092
+ 2025-09-26 22:41:29,230 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Test mean-roc_auc_score: 0.9023
+ 2025-09-26 22:41:29,686 - logs_modchembert_antimalarial_epochs100_batch_size16 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8966, Std Dev: 0.0045
logs_modchembert_classification_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_cocrystal_epochs100_batch_size32_20250927_065415.log ADDED
@@ -0,0 +1,343 @@
1
+ 2025-09-27 06:54:15,854 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Running benchmark for dataset: cocrystal
2
+ 2025-09-27 06:54:15,855 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - dataset: cocrystal, tasks: ['label'], epochs: 100, learning rate: 3e-05
3
+ 2025-09-27 06:54:15,863 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset cocrystal at 2025-09-27_06-54-15
4
+ 2025-09-27 06:54:22,055 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6453 | Val mean-roc_auc_score: 0.7433
5
+ 2025-09-27 06:54:22,056 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
6
+ 2025-09-27 06:54:22,648 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7433
7
+ 2025-09-27 06:54:26,889 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4932 | Val mean-roc_auc_score: 0.8314
8
+ 2025-09-27 06:54:27,083 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
9
+ 2025-09-27 06:54:27,822 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8314
10
+ 2025-09-27 06:54:32,420 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4517 | Val mean-roc_auc_score: 0.8644
11
+ 2025-09-27 06:54:32,619 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
12
+ 2025-09-27 06:54:33,253 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8644
13
+ 2025-09-27 06:54:35,011 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4122 | Val mean-roc_auc_score: 0.8396
14
+ 2025-09-27 06:54:39,066 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3851 | Val mean-roc_auc_score: 0.8421
15
+ 2025-09-27 06:54:43,499 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3523 | Val mean-roc_auc_score: 0.8570
16
+ 2025-09-27 06:54:48,810 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3209 | Val mean-roc_auc_score: 0.8475
17
+ 2025-09-27 06:54:53,712 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.3007 | Val mean-roc_auc_score: 0.8586
18
+ 2025-09-27 06:54:58,553 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.3220 | Val mean-roc_auc_score: 0.8615
19
+ 2025-09-27 06:55:03,090 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2652 | Val mean-roc_auc_score: 0.8694
20
+ 2025-09-27 06:55:03,419 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 370
21
+ 2025-09-27 06:55:04,039 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.8694
22
+ 2025-09-27 06:55:05,706 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.3237 | Val mean-roc_auc_score: 0.8540
23
+ 2025-09-27 06:55:10,497 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2399 | Val mean-roc_auc_score: 0.8344
24
+ 2025-09-27 06:55:14,732 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1959 | Val mean-roc_auc_score: 0.8664
25
+ 2025-09-27 06:55:18,847 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2344 | Val mean-roc_auc_score: 0.8382
26
+ 2025-09-27 06:55:23,289 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1833 | Val mean-roc_auc_score: 0.8343
27
+ 2025-09-27 06:55:27,562 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1993 | Val mean-roc_auc_score: 0.8250
28
+ 2025-09-27 06:55:29,503 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1638 | Val mean-roc_auc_score: 0.8310
29
+ 2025-09-27 06:55:33,924 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1748 | Val mean-roc_auc_score: 0.8589
30
+ 2025-09-27 06:55:38,189 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1497 | Val mean-roc_auc_score: 0.8521
31
+ 2025-09-27 06:55:42,246 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1115 | Val mean-roc_auc_score: 0.8619
32
+ 2025-09-27 06:55:46,455 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1351 | Val mean-roc_auc_score: 0.8530
33
+ 2025-09-27 06:55:50,919 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1138 | Val mean-roc_auc_score: 0.8221
34
+ 2025-09-27 06:55:55,135 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1073 | Val mean-roc_auc_score: 0.8355
35
+ 2025-09-27 06:55:56,615 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1529 | Val mean-roc_auc_score: 0.8461
36
+ 2025-09-27 06:56:00,738 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0988 | Val mean-roc_auc_score: 0.8441
37
+ 2025-09-27 06:56:04,769 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0557 | Val mean-roc_auc_score: 0.8177
38
+ 2025-09-27 06:56:10,051 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0424 | Val mean-roc_auc_score: 0.8081
39
+ 2025-09-27 06:56:14,091 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0423 | Val mean-roc_auc_score: 0.7998
40
+ 2025-09-27 06:56:18,148 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1394 | Val mean-roc_auc_score: 0.8586
41
+ 2025-09-27 06:56:22,282 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0754 | Val mean-roc_auc_score: 0.8534
42
+ 2025-09-27 06:56:23,607 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0448 | Val mean-roc_auc_score: 0.8324
43
+ 2025-09-27 06:56:27,928 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0306 | Val mean-roc_auc_score: 0.8066
44
+ 2025-09-27 06:56:32,041 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0246 | Val mean-roc_auc_score: 0.8011
45
+ 2025-09-27 06:56:36,129 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.7946
46
+ 2025-09-27 06:56:40,253 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0230 | Val mean-roc_auc_score: 0.7873
47
+ 2025-09-27 06:56:44,386 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0287 | Val mean-roc_auc_score: 0.8013
48
+ 2025-09-27 06:56:48,760 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0397 | Val mean-roc_auc_score: 0.8245
49
+ 2025-09-27 06:56:52,898 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0273 | Val mean-roc_auc_score: 0.7877
50
+ 2025-09-27 06:56:54,273 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0684 | Val mean-roc_auc_score: 0.8251
51
+ 2025-09-27 06:56:58,398 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0443 | Val mean-roc_auc_score: 0.8114
52
+ 2025-09-27 06:57:02,562 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0312 | Val mean-roc_auc_score: 0.8268
53
+ 2025-09-27 06:57:06,914 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0446 | Val mean-roc_auc_score: 0.7948
54
+ 2025-09-27 06:57:11,111 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0452 | Val mean-roc_auc_score: 0.8148
55
+ 2025-09-27 06:57:15,289 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0286 | Val mean-roc_auc_score: 0.7965
+ 2025-09-27 06:57:19,392 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0304 | Val mean-roc_auc_score: 0.7858
+ 2025-09-27 06:57:21,234 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.7951
+ 2025-09-27 06:57:26,374 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.7885
+ 2025-09-27 06:57:31,032 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.7913
+ 2025-09-27 06:57:35,685 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.7861
+ 2025-09-27 06:57:40,446 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0108 | Val mean-roc_auc_score: 0.7887
+ 2025-09-27 06:57:44,980 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.7867
+ 2025-09-27 06:57:47,102 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.7833
+ 2025-09-27 06:57:51,614 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.7918
+ 2025-09-27 06:57:56,221 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.7893
+ 2025-09-27 06:58:01,810 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.7919
+ 2025-09-27 06:58:06,396 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.7868
+ 2025-09-27 06:58:11,712 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.7941
+ 2025-09-27 06:58:13,999 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.7898
+ 2025-09-27 06:58:18,895 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.7954
+ 2025-09-27 06:58:23,857 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.7904
+ 2025-09-27 06:58:28,545 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.7889
+ 2025-09-27 06:58:33,765 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0106 | Val mean-roc_auc_score: 0.7836
+ 2025-09-27 06:58:39,041 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.7934
+ 2025-09-27 06:58:41,284 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0274 | Val mean-roc_auc_score: 0.7789
+ 2025-09-27 06:58:46,254 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0262 | Val mean-roc_auc_score: 0.7938
+ 2025-09-27 06:58:51,085 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0132 | Val mean-roc_auc_score: 0.7905
+ 2025-09-27 06:58:56,432 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0115 | Val mean-roc_auc_score: 0.7943
+ 2025-09-27 06:59:01,209 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.7843
+ 2025-09-27 06:59:06,500 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.7884
+ 2025-09-27 06:59:08,537 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.7912
+ 2025-09-27 06:59:13,557 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.7926
+ 2025-09-27 06:59:18,708 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7918
+ 2025-09-27 06:59:23,698 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0918 | Val mean-roc_auc_score: 0.7929
+ 2025-09-27 06:59:28,490 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.7772
+ 2025-09-27 06:59:33,009 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.7920
+ 2025-09-27 06:59:34,918 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0010 | Val mean-roc_auc_score: 0.7923
+ 2025-09-27 06:59:39,826 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.7924
+ 2025-09-27 06:59:44,678 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0019 | Val mean-roc_auc_score: 0.7925
+ 2025-09-27 06:59:49,307 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.7925
+ 2025-09-27 06:59:53,927 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.7932
+ 2025-09-27 06:59:58,397 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7902
+ 2025-09-27 07:00:01,952 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0019 | Val mean-roc_auc_score: 0.7916
+ 2025-09-27 07:00:06,427 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.7924
+ 2025-09-27 07:00:11,036 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0019 | Val mean-roc_auc_score: 0.7923
+ 2025-09-27 07:00:15,555 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0017 | Val mean-roc_auc_score: 0.7921
+ 2025-09-27 07:00:20,194 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.7925
+ 2025-09-27 07:00:24,991 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.7904
+ 2025-09-27 07:00:29,500 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.7908
+ 2025-09-27 07:00:31,287 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0248 | Val mean-roc_auc_score: 0.7695
+ 2025-09-27 07:00:35,698 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0135 | Val mean-roc_auc_score: 0.8000
+ 2025-09-27 07:00:40,203 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.8045
+ 2025-09-27 07:00:45,066 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0017 | Val mean-roc_auc_score: 0.7978
+ 2025-09-27 07:00:49,642 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7910
+ 2025-09-27 07:00:54,237 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.7898
+ 2025-09-27 07:00:58,454 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.7901
+ 2025-09-27 07:01:00,608 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7905
+ 2025-09-27 07:01:05,470 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0014 | Val mean-roc_auc_score: 0.7897
+ 2025-09-27 07:01:10,037 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.7898
+ 2025-09-27 07:01:14,582 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.7905
+ 2025-09-27 07:01:19,193 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0021 | Val mean-roc_auc_score: 0.7898
+ 2025-09-27 07:01:19,604 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8597
+ 2025-09-27 07:01:19,937 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset cocrystal at 2025-09-27_07-01-19
+ 2025-09-27 07:01:24,254 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5980 | Val mean-roc_auc_score: 0.8273
+ 2025-09-27 07:01:24,254 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
+ 2025-09-27 07:01:24,992 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8273
+ 2025-09-27 07:01:27,004 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4696 | Val mean-roc_auc_score: 0.8213
+ 2025-09-27 07:01:31,537 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4119 | Val mean-roc_auc_score: 0.8505
+ 2025-09-27 07:01:31,729 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
+ 2025-09-27 07:01:32,332 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8505
+ 2025-09-27 07:01:37,059 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3716 | Val mean-roc_auc_score: 0.8409
+ 2025-09-27 07:01:41,788 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3497 | Val mean-roc_auc_score: 0.8105
+ 2025-09-27 07:01:46,458 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3111 | Val mean-roc_auc_score: 0.8648
+ 2025-09-27 07:01:46,978 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 222
+ 2025-09-27 07:01:47,574 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8648
+ 2025-09-27 07:01:52,205 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2821 | Val mean-roc_auc_score: 0.8442
+ 2025-09-27 07:01:54,138 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2736 | Val mean-roc_auc_score: 0.8763
+ 2025-09-27 07:01:54,339 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 296
+ 2025-09-27 07:01:54,943 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.8763
+ 2025-09-27 07:01:59,520 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2519 | Val mean-roc_auc_score: 0.8684
+ 2025-09-27 07:02:04,088 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2365 | Val mean-roc_auc_score: 0.8850
+ 2025-09-27 07:02:04,291 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 370
+ 2025-09-27 07:02:04,896 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.8850
+ 2025-09-27 07:02:09,649 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2009 | Val mean-roc_auc_score: 0.8200
+ 2025-09-27 07:02:14,543 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2348 | Val mean-roc_auc_score: 0.8470
+ 2025-09-27 07:02:19,039 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1791 | Val mean-roc_auc_score: 0.8543
+ 2025-09-27 07:02:20,869 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1571 | Val mean-roc_auc_score: 0.8583
+ 2025-09-27 07:02:25,521 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1554 | Val mean-roc_auc_score: 0.8115
+ 2025-09-27 07:02:30,028 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1689 | Val mean-roc_auc_score: 0.8362
+ 2025-09-27 07:02:34,871 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1713 | Val mean-roc_auc_score: 0.8693
+ 2025-09-27 07:02:39,343 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1115 | Val mean-roc_auc_score: 0.8000
+ 2025-09-27 07:02:44,011 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1126 | Val mean-roc_auc_score: 0.8313
+ 2025-09-27 07:02:45,875 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1060 | Val mean-roc_auc_score: 0.8011
+ 2025-09-27 07:02:50,350 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1090 | Val mean-roc_auc_score: 0.7982
+ 2025-09-27 07:02:55,193 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1384 | Val mean-roc_auc_score: 0.7558
+ 2025-09-27 07:02:59,662 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0629 | Val mean-roc_auc_score: 0.7486
+ 2025-09-27 07:03:04,205 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0562 | Val mean-roc_auc_score: 0.7502
+ 2025-09-27 07:03:09,053 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0762 | Val mean-roc_auc_score: 0.7563
+ 2025-09-27 07:03:13,629 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.1512 | Val mean-roc_auc_score: 0.8430
+ 2025-09-27 07:03:16,805 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0802 | Val mean-roc_auc_score: 0.8321
+ 2025-09-27 07:03:21,568 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0608 | Val mean-roc_auc_score: 0.8244
+ 2025-09-27 07:03:26,260 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0642 | Val mean-roc_auc_score: 0.8212
+ 2025-09-27 07:03:31,008 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0469 | Val mean-roc_auc_score: 0.8051
+ 2025-09-27 07:03:35,866 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0389 | Val mean-roc_auc_score: 0.8005
+ 2025-09-27 07:03:41,449 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0263 | Val mean-roc_auc_score: 0.8116
+ 2025-09-27 07:03:43,728 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.7955
+ 2025-09-27 07:03:48,473 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.7942
+ 2025-09-27 07:03:53,253 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0144 | Val mean-roc_auc_score: 0.7851
+ 2025-09-27 07:03:58,043 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.7843
+ 2025-09-27 07:04:02,871 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0125 | Val mean-roc_auc_score: 0.7819
+ 2025-09-27 07:04:07,283 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0358 | Val mean-roc_auc_score: 0.7908
+ 2025-09-27 07:04:08,990 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0781 | Val mean-roc_auc_score: 0.8420
+ 2025-09-27 07:04:13,458 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0503 | Val mean-roc_auc_score: 0.8627
+ 2025-09-27 07:04:17,996 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0377 | Val mean-roc_auc_score: 0.8494
+ 2025-09-27 07:04:22,830 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8241
+ 2025-09-27 07:04:27,417 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0306 | Val mean-roc_auc_score: 0.7900
+ 2025-09-27 07:04:31,907 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0206 | Val mean-roc_auc_score: 0.7831
+ 2025-09-27 07:04:36,334 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0172 | Val mean-roc_auc_score: 0.7783
+ 2025-09-27 07:04:38,058 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0398 | Val mean-roc_auc_score: 0.7772
+ 2025-09-27 07:04:42,789 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0344 | Val mean-roc_auc_score: 0.7974
+ 2025-09-27 07:04:47,245 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0642 | Val mean-roc_auc_score: 0.8292
+ 2025-09-27 07:04:52,111 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0376 | Val mean-roc_auc_score: 0.8197
+ 2025-09-27 07:04:56,522 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.8029
+ 2025-09-27 07:05:00,950 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0133 | Val mean-roc_auc_score: 0.7864
+ 2025-09-27 07:05:03,299 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.7917
+ 2025-09-27 07:05:07,809 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0085 | Val mean-roc_auc_score: 0.7821
+ 2025-09-27 07:05:12,399 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.7809
+ 2025-09-27 07:05:17,872 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.7691
+ 2025-09-27 07:05:22,329 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.7694
+ 2025-09-27 07:05:27,075 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.7708
+ 2025-09-27 07:05:31,229 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.7697
+ 2025-09-27 07:05:33,359 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0077 | Val mean-roc_auc_score: 0.7608
+ 2025-09-27 07:05:37,761 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0030 | Val mean-roc_auc_score: 0.7661
+ 2025-09-27 07:05:42,184 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.7672
+ 2025-09-27 07:05:46,902 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0032 | Val mean-roc_auc_score: 0.7676
+ 2025-09-27 07:05:51,426 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.7731
+ 2025-09-27 07:05:55,834 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0051 | Val mean-roc_auc_score: 0.7624
+ 2025-09-27 07:05:57,570 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.7643
+ 2025-09-27 07:06:02,050 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.7632
+ 2025-09-27 07:06:06,755 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.7645
+ 2025-09-27 07:06:11,175 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.7666
+ 2025-09-27 07:06:15,606 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.7508
+ 2025-09-27 07:06:20,026 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0431 | Val mean-roc_auc_score: 0.7447
+ 2025-09-27 07:06:24,546 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0323 | Val mean-roc_auc_score: 0.7660
+ 2025-09-27 07:06:26,555 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0090 | Val mean-roc_auc_score: 0.7584
+ 2025-09-27 07:06:30,972 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.7556
+ 2025-09-27 07:06:35,548 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0139 | Val mean-roc_auc_score: 0.7946
+ 2025-09-27 07:06:40,006 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0272 | Val mean-roc_auc_score: 0.7512
+ 2025-09-27 07:06:44,427 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.7579
+ 2025-09-27 07:06:49,177 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0117 | Val mean-roc_auc_score: 0.7562
+ 2025-09-27 07:06:50,862 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.7579
+ 2025-09-27 07:06:55,309 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.7602
+ 2025-09-27 07:06:59,771 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.7611
+ 2025-09-27 07:07:04,358 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.7612
+ 2025-09-27 07:07:10,200 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.7601
+ 2025-09-27 07:07:14,818 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.7593
+ 2025-09-27 07:07:19,453 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7593
+ 2025-09-27 07:07:21,134 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.7600
+ 2025-09-27 07:07:25,610 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0084 | Val mean-roc_auc_score: 0.7595
+ 2025-09-27 07:07:30,371 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.7560
+ 2025-09-27 07:07:34,793 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.7575
+ 2025-09-27 07:07:39,261 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.7588
+ 2025-09-27 07:07:43,769 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0032 | Val mean-roc_auc_score: 0.7566
+ 2025-09-27 07:07:45,465 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7580
+ 2025-09-27 07:07:50,432 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.7583
+ 2025-09-27 07:07:54,872 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.7589
+ 2025-09-27 07:07:59,250 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.7614
+ 2025-09-27 07:08:03,719 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.7638
+ 2025-09-27 07:08:08,234 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.8135
+ 2025-09-27 07:08:13,538 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.7961
+ 2025-09-27 07:08:15,542 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0017 | Val mean-roc_auc_score: 0.7954
+ 2025-09-27 07:08:20,586 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0312 | Val mean-roc_auc_score: 0.7838
+ 2025-09-27 07:08:25,454 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0348 | Val mean-roc_auc_score: 0.8084
+ 2025-09-27 07:08:25,884 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8767
+ 2025-09-27 07:08:26,256 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset cocrystal at 2025-09-27_07-08-26
+ 2025-09-27 07:08:31,063 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5946 | Val mean-roc_auc_score: 0.8163
+ 2025-09-27 07:08:31,063 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
+ 2025-09-27 07:08:31,964 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8163
+ 2025-09-27 07:08:36,783 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4696 | Val mean-roc_auc_score: 0.8459
+ 2025-09-27 07:08:36,985 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
+ 2025-09-27 07:08:37,631 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8459
+ 2025-09-27 07:08:42,398 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3722 | Val mean-roc_auc_score: 0.8744
+ 2025-09-27 07:08:42,321 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
+ 2025-09-27 07:08:40,500 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8744
+ 2025-09-27 07:08:45,770 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3986 | Val mean-roc_auc_score: 0.8659
+ 2025-09-27 07:08:50,603 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3598 | Val mean-roc_auc_score: 0.8812
+ 2025-09-27 07:08:50,811 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 185
+ 2025-09-27 07:08:51,457 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8812
+ 2025-09-27 07:08:56,427 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3480 | Val mean-roc_auc_score: 0.8692
+ 2025-09-27 07:09:01,508 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3091 | Val mean-roc_auc_score: 0.8892
+ 2025-09-27 07:09:01,727 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 259
+ 2025-09-27 07:09:02,346 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8892
+ 2025-09-27 07:09:07,434 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2838 | Val mean-roc_auc_score: 0.8903
+ 2025-09-27 07:09:07,765 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 296
+ 2025-09-27 07:09:08,424 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.8903
+ 2025-09-27 07:09:10,357 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2652 | Val mean-roc_auc_score: 0.8942
+ 2025-09-27 07:09:10,570 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 333
+ 2025-09-27 07:09:11,217 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.8942
+ 2025-09-27 07:09:15,773 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2382 | Val mean-roc_auc_score: 0.9033
+ 2025-09-27 07:09:15,982 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 370
+ 2025-09-27 07:09:16,601 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.9033
+ 2025-09-27 07:09:21,564 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2667 | Val mean-roc_auc_score: 0.8680
+ 2025-09-27 07:09:26,550 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2086 | Val mean-roc_auc_score: 0.8727
+ 2025-09-27 07:09:31,087 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1782 | Val mean-roc_auc_score: 0.8591
+ 2025-09-27 07:09:35,931 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1936 | Val mean-roc_auc_score: 0.8618
+ 2025-09-27 07:09:37,722 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1503 | Val mean-roc_auc_score: 0.8481
+ 2025-09-27 07:09:42,499 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1461 | Val mean-roc_auc_score: 0.8523
+ 2025-09-27 07:09:47,362 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1649 | Val mean-roc_auc_score: 0.8642
+ 2025-09-27 07:09:51,807 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1275 | Val mean-roc_auc_score: 0.8556
+ 2025-09-27 07:09:56,241 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0918 | Val mean-roc_auc_score: 0.8248
+ 2025-09-27 07:10:00,723 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1098 | Val mean-roc_auc_score: 0.8473
+ 2025-09-27 07:10:02,473 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1140 | Val mean-roc_auc_score: 0.8282
263
+ 2025-09-27 07:10:07,451 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1038 | Val mean-roc_auc_score: 0.8484
264
+ 2025-09-27 07:10:12,225 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1081 | Val mean-roc_auc_score: 0.8412
265
+ 2025-09-27 07:10:16,657 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0916 | Val mean-roc_auc_score: 0.8663
266
+ 2025-09-27 07:10:21,127 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0456 | Val mean-roc_auc_score: 0.8366
267
+ 2025-09-27 07:10:25,552 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0557 | Val mean-roc_auc_score: 0.8665
268
+ 2025-09-27 07:10:31,339 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0574 | Val mean-roc_auc_score: 0.8638
269
+ 2025-09-27 07:10:33,168 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0638 | Val mean-roc_auc_score: 0.8593
270
+ 2025-09-27 07:10:37,620 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0574 | Val mean-roc_auc_score: 0.8457
271
+ 2025-09-27 07:10:42,167 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.8511
272
+ 2025-09-27 07:10:46,592 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0191 | Val mean-roc_auc_score: 0.8466
273
+ 2025-09-27 07:10:51,426 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0353 | Val mean-roc_auc_score: 0.8299
274
+ 2025-09-27 07:10:55,868 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.1257 | Val mean-roc_auc_score: 0.8009
275
+ 2025-09-27 07:10:57,618 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0583 | Val mean-roc_auc_score: 0.8400
276
+ 2025-09-27 07:11:02,249 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0473 | Val mean-roc_auc_score: 0.8276
277
+ 2025-09-27 07:11:06,753 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0247 | Val mean-roc_auc_score: 0.8525
278
+ 2025-09-27 07:11:11,446 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0319 | Val mean-roc_auc_score: 0.8170
279
+ 2025-09-27 07:11:15,885 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0288 | Val mean-roc_auc_score: 0.8244
280
+ 2025-09-27 07:11:20,769 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0119 | Val mean-roc_auc_score: 0.8181
281
+ 2025-09-27 07:11:25,446 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.8248
282
+ 2025-09-27 07:11:27,289 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0121 | Val mean-roc_auc_score: 0.8394
283
+ 2025-09-27 07:11:32,060 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0267 | Val mean-roc_auc_score: 0.7975
284
+ 2025-09-27 07:11:36,513 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0212 | Val mean-roc_auc_score: 0.7900
285
+ 2025-09-27 07:11:41,012 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0589 | Val mean-roc_auc_score: 0.8317
286
+ 2025-09-27 07:11:45,518 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0323 | Val mean-roc_auc_score: 0.8411
287
+ 2025-09-27 07:11:50,053 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7788
288
+ 2025-09-27 07:11:52,152 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0135 | Val mean-roc_auc_score: 0.8023
289
+ 2025-09-27 07:11:56,689 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0216 | Val mean-roc_auc_score: 0.7655
290
+ 2025-09-27 07:12:01,072 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0640 | Val mean-roc_auc_score: 0.7337
291
+ 2025-09-27 07:12:05,517 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0289 | Val mean-roc_auc_score: 0.7642
292
+ 2025-09-27 07:12:09,965 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.7753
293
+ 2025-09-27 07:12:14,695 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.7749
294
+ 2025-09-27 07:12:19,085 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.7798
295
+ 2025-09-27 07:12:20,742 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.7818
296
+ 2025-09-27 07:12:26,203 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.7863
297
+ 2025-09-27 07:12:30,710 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.7729
298
+ 2025-09-27 07:12:35,556 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0014 | Val mean-roc_auc_score: 0.7784
299
+ 2025-09-27 07:12:39,985 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.7818
300
+ 2025-09-27 07:12:44,701 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.7791
301
+ 2025-09-27 07:12:46,464 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0449 | Val mean-roc_auc_score: 0.8267
302
+ 2025-09-27 07:12:50,971 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0361 | Val mean-roc_auc_score: 0.8152
303
+ 2025-09-27 07:12:55,755 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0198 | Val mean-roc_auc_score: 0.8141
304
+ 2025-09-27 07:13:00,052 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.7941
305
+ 2025-09-27 07:13:04,150 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.7969
306
+ 2025-09-27 07:13:08,319 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.8018
307
+ 2025-09-27 07:13:12,561 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8033
308
+ 2025-09-27 07:13:14,273 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.7999
309
+ 2025-09-27 07:13:18,601 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0051 | Val mean-roc_auc_score: 0.8091
310
+ 2025-09-27 07:13:22,673 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8066
311
+ 2025-09-27 07:13:26,822 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.8117
312
+ 2025-09-27 07:13:30,928 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8004
313
+ 2025-09-27 07:13:35,368 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8130
314
+ 2025-09-27 07:13:39,679 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8117
315
+ 2025-09-27 07:13:41,274 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.8117
316
+ 2025-09-27 07:13:45,393 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.7838
317
+ 2025-09-27 07:13:49,534 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8237
318
+ 2025-09-27 07:13:54,110 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0231 | Val mean-roc_auc_score: 0.8351
319
+ 2025-09-27 07:13:58,290 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.8254
320
+ 2025-09-27 07:14:02,567 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8224
321
+ 2025-09-27 07:14:06,923 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.8063
322
+ 2025-09-27 07:14:08,430 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.8090
323
+ 2025-09-27 07:14:14,104 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8172
324
+ 2025-09-27 07:14:18,539 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8121
325
+ 2025-09-27 07:14:22,947 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0013 | Val mean-roc_auc_score: 0.8139
326
+ 2025-09-27 07:14:27,200 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8159
327
+ 2025-09-27 07:14:31,398 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8141
328
+ 2025-09-27 07:14:36,014 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8145
329
+ 2025-09-27 07:14:37,505 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0012 | Val mean-roc_auc_score: 0.8156
330
+ 2025-09-27 07:14:41,762 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.8136
331
+ 2025-09-27 07:14:46,054 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.8124
332
+ 2025-09-27 07:14:50,248 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.8125
333
+ 2025-09-27 07:14:54,736 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0010 | Val mean-roc_auc_score: 0.8141
334
+ 2025-09-27 07:14:59,136 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8149
335
+ 2025-09-27 07:15:03,665 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8182
336
+ 2025-09-27 07:15:05,130 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8175
337
+ 2025-09-27 07:15:09,562 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8169
338
+ 2025-09-27 07:15:14,174 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.8127
339
+ 2025-09-27 07:15:18,406 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.8139
340
+ 2025-09-27 07:15:22,701 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0021 | Val mean-roc_auc_score: 0.8194
341
+ 2025-09-27 07:15:26,824 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8177
342
+ 2025-09-27 07:15:27,214 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8599
343
+ 2025-09-27 07:15:27,498 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8654, Std Dev: 0.0080
logs_modchembert_classification_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_covid19_epochs100_batch_size32_20250927_065342.log ADDED
@@ -0,0 +1,331 @@
1
+ 2025-09-27 06:53:42,820 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Running benchmark for dataset: covid19
2
+ 2025-09-27 06:53:42,821 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - dataset: covid19, tasks: ['label'], epochs: 100, learning rate: 3e-05
3
+ 2025-09-27 06:53:42,830 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset covid19 at 2025-09-27_06-53-42
4
+ 2025-09-27 06:53:54,457 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5346 | Val mean-roc_auc_score: 0.8325
5
+ 2025-09-27 06:53:54,457 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
6
+ 2025-09-27 06:53:55,371 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8325
7
+ 2025-09-27 06:54:04,857 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4125 | Val mean-roc_auc_score: 0.8493
8
+ 2025-09-27 06:54:05,056 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
9
+ 2025-09-27 06:54:05,747 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8493
10
+ 2025-09-27 06:54:12,443 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3615 | Val mean-roc_auc_score: 0.8263
11
+ 2025-09-27 06:54:21,931 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3000 | Val mean-roc_auc_score: 0.8540
12
+ 2025-09-27 06:54:22,125 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 260
13
+ 2025-09-27 06:54:22,761 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8540
14
+ 2025-09-27 06:54:32,520 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2650 | Val mean-roc_auc_score: 0.8443
15
+ 2025-09-27 06:54:39,636 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1990 | Val mean-roc_auc_score: 0.8292
16
+ 2025-09-27 06:54:49,954 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1602 | Val mean-roc_auc_score: 0.8166
17
+ 2025-09-27 06:55:00,078 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1508 | Val mean-roc_auc_score: 0.8314
18
+ 2025-09-27 06:55:07,192 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1120 | Val mean-roc_auc_score: 0.8422
19
+ 2025-09-27 06:55:17,167 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0766 | Val mean-roc_auc_score: 0.8282
20
+ 2025-09-27 06:55:27,071 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1062 | Val mean-roc_auc_score: 0.8349
21
+ 2025-09-27 06:55:34,590 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0563 | Val mean-roc_auc_score: 0.8361
22
+ 2025-09-27 06:55:44,376 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0545 | Val mean-roc_auc_score: 0.8276
23
+ 2025-09-27 06:55:54,257 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0457 | Val mean-roc_auc_score: 0.8320
24
+ 2025-09-27 06:56:01,307 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0329 | Val mean-roc_auc_score: 0.8395
25
+ 2025-09-27 06:56:12,068 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0357 | Val mean-roc_auc_score: 0.8277
26
+ 2025-09-27 06:56:22,162 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0118 | Val mean-roc_auc_score: 0.8402
27
+ 2025-09-27 06:56:29,009 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0320 | Val mean-roc_auc_score: 0.8353
28
+ 2025-09-27 06:56:38,771 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0328 | Val mean-roc_auc_score: 0.8378
29
+ 2025-09-27 06:56:48,510 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0325 | Val mean-roc_auc_score: 0.8331
30
+ 2025-09-27 06:56:55,388 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0288 | Val mean-roc_auc_score: 0.8300
31
+ 2025-09-27 06:57:05,384 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0484 | Val mean-roc_auc_score: 0.8389
32
+ 2025-09-27 06:57:15,344 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0327 | Val mean-roc_auc_score: 0.8316
33
+ 2025-09-27 06:57:22,523 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0339 | Val mean-roc_auc_score: 0.8304
34
+ 2025-09-27 06:57:32,895 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0525 | Val mean-roc_auc_score: 0.8280
35
+ 2025-09-27 06:57:43,084 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0579 | Val mean-roc_auc_score: 0.8382
36
+ 2025-09-27 06:57:51,041 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0457 | Val mean-roc_auc_score: 0.8255
37
+ 2025-09-27 06:58:01,205 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0361 | Val mean-roc_auc_score: 0.8252
38
+ 2025-09-27 06:58:11,487 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0262 | Val mean-roc_auc_score: 0.8338
39
+ 2025-09-27 06:58:19,344 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0314 | Val mean-roc_auc_score: 0.8181
40
+ 2025-09-27 06:58:31,097 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0411 | Val mean-roc_auc_score: 0.8436
41
+ 2025-09-27 06:58:42,213 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0312 | Val mean-roc_auc_score: 0.8416
42
+ 2025-09-27 06:58:49,877 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0248 | Val mean-roc_auc_score: 0.8419
43
+ 2025-09-27 06:59:00,214 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0217 | Val mean-roc_auc_score: 0.8418
44
+ 2025-09-27 06:59:08,107 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0215 | Val mean-roc_auc_score: 0.8405
45
+ 2025-09-27 06:59:18,654 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0285 | Val mean-roc_auc_score: 0.8440
46
+ 2025-09-27 06:59:29,266 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.8430
47
+ 2025-09-27 06:59:36,533 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.8434
48
+ 2025-09-27 06:59:46,610 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0251 | Val mean-roc_auc_score: 0.8411
49
+ 2025-09-27 06:59:56,561 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.8364
50
+ 2025-09-27 07:00:03,683 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.8449
51
+ 2025-09-27 07:00:14,068 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0249 | Val mean-roc_auc_score: 0.8371
52
+ 2025-09-27 07:00:24,011 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0227 | Val mean-roc_auc_score: 0.8396
53
+ 2025-09-27 07:00:31,097 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0255 | Val mean-roc_auc_score: 0.8293
54
+ 2025-09-27 07:00:41,085 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.8373
55
+ 2025-09-27 07:00:50,915 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0236 | Val mean-roc_auc_score: 0.8333
56
+ 2025-09-27 07:00:59,637 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0420 | Val mean-roc_auc_score: 0.8413
57
+ 2025-09-27 07:01:09,468 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.8374
58
+ 2025-09-27 07:01:19,463 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.8358
59
+ 2025-09-27 07:01:26,817 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0205 | Val mean-roc_auc_score: 0.8409
60
+ 2025-09-27 07:01:36,885 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0164 | Val mean-roc_auc_score: 0.8384
61
+ 2025-09-27 07:01:47,466 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8405
62
+ 2025-09-27 07:01:54,722 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0222 | Val mean-roc_auc_score: 0.8371
63
+ 2025-09-27 07:02:04,802 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.8376
64
+ 2025-09-27 07:02:14,764 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8387
65
+ 2025-09-27 07:02:21,902 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8412
66
+ 2025-09-27 07:02:32,027 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0199 | Val mean-roc_auc_score: 0.8406
67
+ 2025-09-27 07:02:41,883 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8403
68
+ 2025-09-27 07:02:49,007 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0213 | Val mean-roc_auc_score: 0.8427
69
+ 2025-09-27 07:02:58,873 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.8456
70
+ 2025-09-27 07:03:08,779 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0245 | Val mean-roc_auc_score: 0.8386
71
+ 2025-09-27 07:03:17,451 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0234 | Val mean-roc_auc_score: 0.8325
72
+ 2025-09-27 07:03:27,565 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0240 | Val mean-roc_auc_score: 0.8411
73
+ 2025-09-27 07:03:38,086 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0271 | Val mean-roc_auc_score: 0.8390
74
+ 2025-09-27 07:03:45,818 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0111 | Val mean-roc_auc_score: 0.8400
75
+ 2025-09-27 07:03:56,199 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.8389
76
+ 2025-09-27 07:04:06,431 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0199 | Val mean-roc_auc_score: 0.8462
77
+ 2025-09-27 07:04:13,631 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0348 | Val mean-roc_auc_score: 0.8511
78
+ 2025-09-27 07:04:23,571 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0308 | Val mean-roc_auc_score: 0.8427
79
+ 2025-09-27 07:04:33,568 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.8430
80
+ 2025-09-27 07:04:40,731 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8423
81
+ 2025-09-27 07:04:50,961 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8428
82
+ 2025-09-27 07:05:00,842 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0149 | Val mean-roc_auc_score: 0.8415
83
+ 2025-09-27 07:05:08,269 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8431
84
+ 2025-09-27 07:05:18,124 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8441
85
+ 2025-09-27 07:05:27,921 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0170 | Val mean-roc_auc_score: 0.8467
86
+ 2025-09-27 07:05:36,754 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0215 | Val mean-roc_auc_score: 0.8423
87
+ 2025-09-27 07:05:46,722 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.8400
88
+ 2025-09-27 07:05:56,716 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0179 | Val mean-roc_auc_score: 0.8418
89
+ 2025-09-27 07:06:04,279 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8418
90
+ 2025-09-27 07:06:14,158 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.8419
91
+ 2025-09-27 07:06:24,527 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8421
92
+ 2025-09-27 07:06:31,811 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.8419
93
+ 2025-09-27 07:06:41,750 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0157 | Val mean-roc_auc_score: 0.8416
94
+ 2025-09-27 07:06:51,702 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0100 | Val mean-roc_auc_score: 0.8419
95
+ 2025-09-27 07:06:58,795 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8402
96
+ 2025-09-27 07:07:09,084 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0141 | Val mean-roc_auc_score: 0.8390
97
+ 2025-09-27 07:07:19,055 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.8402
98
+ 2025-09-27 07:07:26,214 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0147 | Val mean-roc_auc_score: 0.8392
99
+ 2025-09-27 07:07:36,151 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8398
100
+ 2025-09-27 07:07:46,088 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.8390
101
+ 2025-09-27 07:07:53,623 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0154 | Val mean-roc_auc_score: 0.8401
102
+ 2025-09-27 07:08:04,655 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8396
103
+ 2025-09-27 07:08:14,861 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0126 | Val mean-roc_auc_score: 0.8390
104
+ 2025-09-27 07:08:22,633 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8385
105
+ 2025-09-27 07:08:33,408 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0182 | Val mean-roc_auc_score: 0.8372
106
+ 2025-09-27 07:08:41,131 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.8380
107
+ 2025-09-27 07:08:51,478 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0147 | Val mean-roc_auc_score: 0.8377
108
+ 2025-09-27 07:09:01,676 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8372
109
+ 2025-09-27 07:09:08,842 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0149 | Val mean-roc_auc_score: 0.8412
110
+ 2025-09-27 07:09:09,714 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8149
111
+ 2025-09-27 07:09:10,037 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset covid19 at 2025-09-27_07-09-10
112
+ 2025-09-27 07:09:19,149 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5192 | Val mean-roc_auc_score: 0.8342
113
+ 2025-09-27 07:09:19,149 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
114
+ 2025-09-27 07:09:20,205 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8342
115
+ 2025-09-27 07:09:30,513 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4000 | Val mean-roc_auc_score: 0.8478
116
+ 2025-09-27 07:09:30,727 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
117
+ 2025-09-27 07:09:31,356 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8478
118
+ 2025-09-27 07:09:38,775 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3615 | Val mean-roc_auc_score: 0.8549
119
+ 2025-09-27 07:09:38,986 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 195
120
+ 2025-09-27 07:09:39,763 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8549
121
+ 2025-09-27 07:09:50,315 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3042 | Val mean-roc_auc_score: 0.8473
+ 2025-09-27 07:10:00,417 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2587 | Val mean-roc_auc_score: 0.8478
+ 2025-09-27 07:10:07,606 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1913 | Val mean-roc_auc_score: 0.8437
+ 2025-09-27 07:10:17,777 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1256 | Val mean-roc_auc_score: 0.8367
+ 2025-09-27 07:10:27,570 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1164 | Val mean-roc_auc_score: 0.8448
+ 2025-09-27 07:10:34,692 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0899 | Val mean-roc_auc_score: 0.8398
+ 2025-09-27 07:10:44,762 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0819 | Val mean-roc_auc_score: 0.8306
+ 2025-09-27 07:10:54,658 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0661 | Val mean-roc_auc_score: 0.8146
+ 2025-09-27 07:11:02,327 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0721 | Val mean-roc_auc_score: 0.8171
+ 2025-09-27 07:11:12,170 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0531 | Val mean-roc_auc_score: 0.8249
+ 2025-09-27 07:11:22,184 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0605 | Val mean-roc_auc_score: 0.8245
+ 2025-09-27 07:11:29,288 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0601 | Val mean-roc_auc_score: 0.8238
+ 2025-09-27 07:11:40,320 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0504 | Val mean-roc_auc_score: 0.8186
+ 2025-09-27 07:11:50,685 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0711 | Val mean-roc_auc_score: 0.8296
+ 2025-09-27 07:11:58,014 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0361 | Val mean-roc_auc_score: 0.8219
+ 2025-09-27 07:12:08,187 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0282 | Val mean-roc_auc_score: 0.8192
+ 2025-09-27 07:12:18,046 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0351 | Val mean-roc_auc_score: 0.8251
+ 2025-09-27 07:12:25,164 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0297 | Val mean-roc_auc_score: 0.8261
+ 2025-09-27 07:12:35,516 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0318 | Val mean-roc_auc_score: 0.8246
+ 2025-09-27 07:12:45,757 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0377 | Val mean-roc_auc_score: 0.8349
+ 2025-09-27 07:12:52,937 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0293 | Val mean-roc_auc_score: 0.8364
+ 2025-09-27 07:13:02,779 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0277 | Val mean-roc_auc_score: 0.8307
+ 2025-09-27 07:13:12,743 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0263 | Val mean-roc_auc_score: 0.8277
+ 2025-09-27 07:13:20,070 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0276 | Val mean-roc_auc_score: 0.8375
+ 2025-09-27 07:13:29,834 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0582 | Val mean-roc_auc_score: 0.8372
+ 2025-09-27 07:13:39,575 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0293 | Val mean-roc_auc_score: 0.8366
+ 2025-09-27 07:13:46,977 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0239 | Val mean-roc_auc_score: 0.8321
+ 2025-09-27 07:13:57,937 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0141 | Val mean-roc_auc_score: 0.8388
+ 2025-09-27 07:14:08,351 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0240 | Val mean-roc_auc_score: 0.8288
+ 2025-09-27 07:14:15,543 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0218 | Val mean-roc_auc_score: 0.8324
+ 2025-09-27 07:14:25,542 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0235 | Val mean-roc_auc_score: 0.8307
+ 2025-09-27 07:14:35,404 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.8352
+ 2025-09-27 07:14:42,555 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8334
+ 2025-09-27 07:14:52,739 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.8354
+ 2025-09-27 07:15:02,571 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0215 | Val mean-roc_auc_score: 0.8309
+ 2025-09-27 07:15:09,744 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0241 | Val mean-roc_auc_score: 0.8310
+ 2025-09-27 07:15:19,664 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.8360
+ 2025-09-27 07:15:29,499 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.8325
+ 2025-09-27 07:15:36,561 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0206 | Val mean-roc_auc_score: 0.8341
+ 2025-09-27 07:15:45,981 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0194 | Val mean-roc_auc_score: 0.8345
+ 2025-09-27 07:15:55,500 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.8343
+ 2025-09-27 07:16:02,248 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0344 | Val mean-roc_auc_score: 0.8441
+ 2025-09-27 07:16:11,708 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0329 | Val mean-roc_auc_score: 0.8374
+ 2025-09-27 07:16:22,611 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8383
+ 2025-09-27 07:16:29,400 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8359
+ 2025-09-27 07:16:38,869 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8382
+ 2025-09-27 07:16:48,380 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0509 | Val mean-roc_auc_score: 0.8237
+ 2025-09-27 07:16:55,105 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0299 | Val mean-roc_auc_score: 0.8194
+ 2025-09-27 07:17:04,818 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0243 | Val mean-roc_auc_score: 0.8202
+ 2025-09-27 07:17:14,270 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0203 | Val mean-roc_auc_score: 0.8277
+ 2025-09-27 07:17:20,977 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.8268
+ 2025-09-27 07:17:30,442 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0213 | Val mean-roc_auc_score: 0.8068
+ 2025-09-27 07:17:39,904 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0719 | Val mean-roc_auc_score: 0.8154
+ 2025-09-27 07:17:47,017 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0500 | Val mean-roc_auc_score: 0.8182
+ 2025-09-27 07:17:56,453 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0346 | Val mean-roc_auc_score: 0.8151
+ 2025-09-27 07:18:05,946 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0191 | Val mean-roc_auc_score: 0.8205
+ 2025-09-27 07:18:12,758 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0212 | Val mean-roc_auc_score: 0.8181
+ 2025-09-27 07:18:22,129 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.8157
+ 2025-09-27 07:18:33,007 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0182 | Val mean-roc_auc_score: 0.8178
+ 2025-09-27 07:18:42,440 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0183 | Val mean-roc_auc_score: 0.8178
+ 2025-09-27 07:18:49,198 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.8164
+ 2025-09-27 07:18:58,679 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0228 | Val mean-roc_auc_score: 0.8189
+ 2025-09-27 07:19:08,288 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.8184
+ 2025-09-27 07:19:15,590 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.8216
+ 2025-09-27 07:19:25,165 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0236 | Val mean-roc_auc_score: 0.8204
+ 2025-09-27 07:19:34,693 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8180
+ 2025-09-27 07:19:41,523 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8201
+ 2025-09-27 07:19:51,074 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0108 | Val mean-roc_auc_score: 0.8218
+ 2025-09-27 07:20:00,998 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8198
+ 2025-09-27 07:20:07,862 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8198
+ 2025-09-27 07:20:17,321 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0078 | Val mean-roc_auc_score: 0.8199
+ 2025-09-27 07:20:26,750 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.8189
+ 2025-09-27 07:20:33,423 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0254 | Val mean-roc_auc_score: 0.8137
+ 2025-09-27 07:20:44,275 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.8194
+ 2025-09-27 07:20:53,646 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8197
+ 2025-09-27 07:21:00,314 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0114 | Val mean-roc_auc_score: 0.8220
+ 2025-09-27 07:21:09,757 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8221
+ 2025-09-27 07:21:19,330 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.8204
+ 2025-09-27 07:21:26,780 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.8212
+ 2025-09-27 07:21:36,447 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8190
+ 2025-09-27 07:21:46,120 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.8198
+ 2025-09-27 07:21:52,752 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0225 | Val mean-roc_auc_score: 0.8206
+ 2025-09-27 07:22:01,753 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.8201
+ 2025-09-27 07:22:11,069 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.8216
+ 2025-09-27 07:22:20,052 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8216
+ 2025-09-27 07:22:26,398 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8207
+ 2025-09-27 07:22:35,564 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8225
+ 2025-09-27 07:22:44,791 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.8199
+ 2025-09-27 07:22:51,739 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8230
+ 2025-09-27 07:23:02,203 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8218
+ 2025-09-27 07:23:11,240 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0234 | Val mean-roc_auc_score: 0.8236
+ 2025-09-27 07:23:17,448 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8212
+ 2025-09-27 07:23:26,460 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0185 | Val mean-roc_auc_score: 0.8240
+ 2025-09-27 07:23:35,728 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0057 | Val mean-roc_auc_score: 0.8191
+ 2025-09-27 07:23:41,915 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.8215
+ 2025-09-27 07:23:50,772 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.8219
+ 2025-09-27 07:23:59,713 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8216
+ 2025-09-27 07:24:00,500 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8361
+ 2025-09-27 07:24:00,829 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset covid19 at 2025-09-27_07-24-00
+ 2025-09-27 07:24:09,004 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5423 | Val mean-roc_auc_score: 0.8314
+ 2025-09-27 07:24:09,004 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
+ 2025-09-27 07:24:09,689 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8314
+ 2025-09-27 07:24:16,205 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4542 | Val mean-roc_auc_score: 0.8382
+ 2025-09-27 07:24:16,392 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
+ 2025-09-27 07:24:16,949 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8382
+ 2025-09-27 07:24:26,008 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3769 | Val mean-roc_auc_score: 0.8424
+ 2025-09-27 07:24:26,202 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 195
+ 2025-09-27 07:24:26,778 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8424
+ 2025-09-27 07:24:35,968 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3271 | Val mean-roc_auc_score: 0.8485
+ 2025-09-27 07:24:36,168 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 260
+ 2025-09-27 07:24:36,768 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8485
+ 2025-09-27 07:24:43,256 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2975 | Val mean-roc_auc_score: 0.8390
+ 2025-09-27 07:24:52,390 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2077 | Val mean-roc_auc_score: 0.8201
+ 2025-09-27 07:25:01,682 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1659 | Val mean-roc_auc_score: 0.8542
+ 2025-09-27 07:25:01,868 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 455
+ 2025-09-27 07:25:02,435 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8542
+ 2025-09-27 07:25:08,962 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1688 | Val mean-roc_auc_score: 0.8325
+ 2025-09-27 07:25:17,989 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1024 | Val mean-roc_auc_score: 0.8476
+ 2025-09-27 07:25:27,096 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0725 | Val mean-roc_auc_score: 0.8319
+ 2025-09-27 07:25:33,379 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0802 | Val mean-roc_auc_score: 0.8353
+ 2025-09-27 07:25:42,797 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0587 | Val mean-roc_auc_score: 0.8312
+ 2025-09-27 07:25:51,727 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0556 | Val mean-roc_auc_score: 0.8322
+ 2025-09-27 07:25:58,074 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0477 | Val mean-roc_auc_score: 0.8273
+ 2025-09-27 07:26:07,122 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0368 | Val mean-roc_auc_score: 0.8379
+ 2025-09-27 07:26:17,401 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0410 | Val mean-roc_auc_score: 0.8309
+ 2025-09-27 07:26:24,087 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0863 | Val mean-roc_auc_score: 0.8201
+ 2025-09-27 07:26:33,143 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0538 | Val mean-roc_auc_score: 0.8366
+ 2025-09-27 07:26:42,244 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0379 | Val mean-roc_auc_score: 0.8300
+ 2025-09-27 07:26:51,135 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0349 | Val mean-roc_auc_score: 0.8224
+ 2025-09-27 07:26:57,416 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0317 | Val mean-roc_auc_score: 0.8338
+ 2025-09-27 07:27:06,614 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0268 | Val mean-roc_auc_score: 0.8408
+ 2025-09-27 07:27:15,706 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0387 | Val mean-roc_auc_score: 0.8054
+ 2025-09-27 07:27:21,910 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0672 | Val mean-roc_auc_score: 0.8447
+ 2025-09-27 07:27:31,020 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0688 | Val mean-roc_auc_score: 0.8350
+ 2025-09-27 07:27:40,107 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0404 | Val mean-roc_auc_score: 0.8261
+ 2025-09-27 07:27:46,930 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0260 | Val mean-roc_auc_score: 0.8384
+ 2025-09-27 07:27:56,184 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0252 | Val mean-roc_auc_score: 0.8371
+ 2025-09-27 07:28:05,409 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0222 | Val mean-roc_auc_score: 0.8363
+ 2025-09-27 07:28:14,472 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0238 | Val mean-roc_auc_score: 0.8385
+ 2025-09-27 07:28:22,165 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8350
+ 2025-09-27 07:28:31,676 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0273 | Val mean-roc_auc_score: 0.8386
+ 2025-09-27 07:28:40,764 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0243 | Val mean-roc_auc_score: 0.8368
+ 2025-09-27 07:28:47,145 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.8385
+ 2025-09-27 07:28:56,221 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0243 | Val mean-roc_auc_score: 0.8386
+ 2025-09-27 07:29:05,415 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0225 | Val mean-roc_auc_score: 0.8438
+ 2025-09-27 07:29:12,056 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0412 | Val mean-roc_auc_score: 0.8437
+ 2025-09-27 07:29:21,144 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0213 | Val mean-roc_auc_score: 0.8373
+ 2025-09-27 07:29:30,013 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0222 | Val mean-roc_auc_score: 0.8390
+ 2025-09-27 07:29:36,422 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.8407
+ 2025-09-27 07:29:45,367 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0200 | Val mean-roc_auc_score: 0.8458
+ 2025-09-27 07:29:54,799 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.8464
+ 2025-09-27 07:30:03,800 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8416
+ 2025-09-27 07:30:10,191 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.8394
+ 2025-09-27 07:30:19,118 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0176 | Val mean-roc_auc_score: 0.8471
+ 2025-09-27 07:30:28,402 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0222 | Val mean-roc_auc_score: 0.8409
+ 2025-09-27 07:30:36,266 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.8457
+ 2025-09-27 07:30:45,203 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.8406
+ 2025-09-27 07:30:54,249 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0181 | Val mean-roc_auc_score: 0.8380
+ 2025-09-27 07:31:00,487 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.8415
+ 2025-09-27 07:31:09,490 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0310 | Val mean-roc_auc_score: 0.8420
+ 2025-09-27 07:31:18,638 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8418
+ 2025-09-27 07:31:25,018 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0182 | Val mean-roc_auc_score: 0.8420
+ 2025-09-27 07:31:34,136 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0311 | Val mean-roc_auc_score: 0.8455
+ 2025-09-27 07:31:43,195 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0219 | Val mean-roc_auc_score: 0.8418
+ 2025-09-27 07:31:52,111 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0230 | Val mean-roc_auc_score: 0.8421
+ 2025-09-27 07:31:58,862 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0133 | Val mean-roc_auc_score: 0.8439
+ 2025-09-27 07:32:07,780 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0172 | Val mean-roc_auc_score: 0.8484
+ 2025-09-27 07:32:16,888 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0629 | Val mean-roc_auc_score: 0.8430
+ 2025-09-27 07:32:23,161 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0308 | Val mean-roc_auc_score: 0.8400
+ 2025-09-27 07:32:32,293 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0239 | Val mean-roc_auc_score: 0.8420
+ 2025-09-27 07:32:42,736 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.8462
+ 2025-09-27 07:32:48,901 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8457
+ 2025-09-27 07:32:57,941 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8441
+ 2025-09-27 07:33:06,829 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0194 | Val mean-roc_auc_score: 0.8448
+ 2025-09-27 07:33:13,192 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8451
+ 2025-09-27 07:33:22,416 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0284 | Val mean-roc_auc_score: 0.8362
+ 2025-09-27 07:33:31,440 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0170 | Val mean-roc_auc_score: 0.8415
+ 2025-09-27 07:33:40,415 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.8349
+ 2025-09-27 07:33:46,772 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0188 | Val mean-roc_auc_score: 0.8443
+ 2025-09-27 07:33:55,732 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0247 | Val mean-roc_auc_score: 0.8420
+ 2025-09-27 07:34:05,120 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8428
+ 2025-09-27 07:34:11,296 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0181 | Val mean-roc_auc_score: 0.8426
+ 2025-09-27 07:34:20,317 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0262 | Val mean-roc_auc_score: 0.8435
+ 2025-09-27 07:34:29,243 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8402
+ 2025-09-27 07:34:35,535 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0164 | Val mean-roc_auc_score: 0.8435
+ 2025-09-27 07:34:45,950 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8409
+ 2025-09-27 07:34:54,956 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.8392
+ 2025-09-27 07:35:04,107 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.8406
+ 2025-09-27 07:35:10,336 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8375
+ 2025-09-27 07:35:19,445 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0210 | Val mean-roc_auc_score: 0.8415
+ 2025-09-27 07:35:28,619 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8422
+ 2025-09-27 07:35:34,887 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8414
+ 2025-09-27 07:35:43,777 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0367 | Val mean-roc_auc_score: 0.8413
+ 2025-09-27 07:35:52,830 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0238 | Val mean-roc_auc_score: 0.8313
+ 2025-09-27 07:35:59,070 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8322
+ 2025-09-27 07:36:08,404 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0193 | Val mean-roc_auc_score: 0.8313
+ 2025-09-27 07:36:17,305 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0248 | Val mean-roc_auc_score: 0.8286
+ 2025-09-27 07:36:26,302 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8296
+ 2025-09-27 07:36:32,466 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0219 | Val mean-roc_auc_score: 0.8365
+ 2025-09-27 07:36:41,478 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0263 | Val mean-roc_auc_score: 0.8296
+ 2025-09-27 07:36:50,787 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8320
+ 2025-09-27 07:36:58,250 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8339
+ 2025-09-27 07:37:07,293 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0082 | Val mean-roc_auc_score: 0.8350
+ 2025-09-27 07:37:16,237 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.8341
+ 2025-09-27 07:37:22,635 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8342
+ 2025-09-27 07:37:31,941 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8356
+ 2025-09-27 07:37:40,998 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0154 | Val mean-roc_auc_score: 0.8353
+ 2025-09-27 07:37:47,251 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0142 | Val mean-roc_auc_score: 0.8337
329
+ 2025-09-27 07:37:56,435 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8347
330
+ 2025-09-27 07:37:57,242 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.7886
331
+ 2025-09-27 07:37:57,576 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8132, Std Dev: 0.0195
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_microsom_stab_h_epochs100_batch_size32_20250926_053902.log ADDED
@@ -0,0 +1,361 @@
1
+ 2025-09-26 05:39:02,792 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_microsom_stab_h
2
+ 2025-09-26 05:39:02,792 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - dataset: adme_microsom_stab_h, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
3
+ 2025-09-26 05:39:02,796 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_microsom_stab_h at 2025-09-26_05-39-02
4
+ 2025-09-26 05:39:08,901 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7639 | Val rms_score: 0.4001
5
+ 2025-09-26 05:39:08,901 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
6
+ 2025-09-26 05:39:09,803 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4001
7
+ 2025-09-26 05:39:16,266 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4219 | Val rms_score: 0.3763
8
+ 2025-09-26 05:39:16,470 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
9
+ 2025-09-26 05:39:17,096 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.3763
10
+ 2025-09-26 05:39:24,110 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4236 | Val rms_score: 0.3929
11
+ 2025-09-26 05:39:31,736 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3086 | Val rms_score: 0.3813
12
+ 2025-09-26 05:39:36,167 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2234 | Val rms_score: 0.3830
13
+ 2025-09-26 05:39:44,049 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1406 | Val rms_score: 0.3773
14
+ 2025-09-26 05:39:52,957 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1163 | Val rms_score: 0.3875
15
+ 2025-09-26 05:40:01,593 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1016 | Val rms_score: 0.3743
16
+ 2025-09-26 05:40:01,752 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 432
17
+ 2025-09-26 05:40:02,415 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.3743
18
+ 2025-09-26 05:40:08,790 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0775 | Val rms_score: 0.3693
19
+ 2025-09-26 05:40:09,018 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 486
20
+ 2025-09-26 05:40:09,687 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3693
21
+ 2025-09-26 05:40:18,045 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0676 | Val rms_score: 0.3863
22
+ 2025-09-26 05:40:26,418 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0567 | Val rms_score: 0.3933
23
+ 2025-09-26 05:40:34,243 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0589 | Val rms_score: 0.3817
24
+ 2025-09-26 05:40:39,302 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0361 | Val rms_score: 0.3839
25
+ 2025-09-26 05:40:47,737 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0469 | Val rms_score: 0.3810
26
+ 2025-09-26 05:40:56,370 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0365 | Val rms_score: 0.3766
27
+ 2025-09-26 05:41:04,238 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0394 | Val rms_score: 0.3749
28
+ 2025-09-26 05:41:10,299 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0380 | Val rms_score: 0.3869
29
+ 2025-09-26 05:41:18,309 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0333 | Val rms_score: 0.3808
30
+ 2025-09-26 05:41:27,042 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0320 | Val rms_score: 0.3813
31
+ 2025-09-26 05:41:35,320 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0330 | Val rms_score: 0.3753
32
+ 2025-09-26 05:41:40,980 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0324 | Val rms_score: 0.3777
33
+ 2025-09-26 05:41:48,902 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0320 | Val rms_score: 0.3703
34
+ 2025-09-26 05:41:57,282 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0320 | Val rms_score: 0.3826
35
+ 2025-09-26 05:42:05,647 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0302 | Val rms_score: 0.3747
36
+ 2025-09-26 05:42:10,759 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0309 | Val rms_score: 0.3760
37
+ 2025-09-26 05:42:18,533 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0291 | Val rms_score: 0.3676
38
+ 2025-09-26 05:42:18,993 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1404
39
+ 2025-09-26 05:42:19,633 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.3676
40
+ 2025-09-26 05:42:27,798 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0289 | Val rms_score: 0.3744
41
+ 2025-09-26 05:42:35,334 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0270 | Val rms_score: 0.3838
42
+ 2025-09-26 05:42:41,013 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0276 | Val rms_score: 0.3796
43
+ 2025-09-26 05:42:49,317 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0262 | Val rms_score: 0.3764
44
+ 2025-09-26 05:42:57,278 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0258 | Val rms_score: 0.3766
45
+ 2025-09-26 05:43:05,933 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0240 | Val rms_score: 0.3749
46
+ 2025-09-26 05:43:11,609 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0258 | Val rms_score: 0.3735
47
+ 2025-09-26 05:43:19,652 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0247 | Val rms_score: 0.3701
48
+ 2025-09-26 05:43:27,598 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0246 | Val rms_score: 0.3770
49
+ 2025-09-26 05:43:35,403 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0243 | Val rms_score: 0.3691
50
+ 2025-09-26 05:43:41,011 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0233 | Val rms_score: 0.3746
51
+ 2025-09-26 05:43:50,093 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0242 | Val rms_score: 0.3695
52
+ 2025-09-26 05:43:57,695 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0191 | Val rms_score: 0.3714
53
+ 2025-09-26 05:44:05,133 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0229 | Val rms_score: 0.3744
54
+ 2025-09-26 05:44:10,754 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0199 | Val rms_score: 0.3756
55
+ 2025-09-26 05:44:18,897 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0221 | Val rms_score: 0.3719
56
+ 2025-09-26 05:44:27,117 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0218 | Val rms_score: 0.3769
57
+ 2025-09-26 05:44:35,608 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0220 | Val rms_score: 0.3712
58
+ 2025-09-26 05:44:41,830 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0190 | Val rms_score: 0.3760
59
+ 2025-09-26 05:44:50,433 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0217 | Val rms_score: 0.3714
60
+ 2025-09-26 05:44:59,346 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0200 | Val rms_score: 0.3707
61
+ 2025-09-26 05:45:04,531 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0204 | Val rms_score: 0.3771
62
+ 2025-09-26 05:45:12,186 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0189 | Val rms_score: 0.3748
63
+ 2025-09-26 05:45:20,338 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0195 | Val rms_score: 0.3723
64
+ 2025-09-26 05:45:28,487 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0198 | Val rms_score: 0.3740
65
+ 2025-09-26 05:45:33,832 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0198 | Val rms_score: 0.3731
66
+ 2025-09-26 05:45:41,321 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0188 | Val rms_score: 0.3739
67
+ 2025-09-26 05:45:49,203 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0176 | Val rms_score: 0.3719
68
+ 2025-09-26 05:45:56,382 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0191 | Val rms_score: 0.3762
69
+ 2025-09-26 05:46:04,974 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0199 | Val rms_score: 0.3709
70
+ 2025-09-26 05:46:10,636 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0190 | Val rms_score: 0.3705
71
+ 2025-09-26 05:46:18,178 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0194 | Val rms_score: 0.3693
72
+ 2025-09-26 05:46:25,373 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0174 | Val rms_score: 0.3720
73
+ 2025-09-26 05:46:33,389 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0186 | Val rms_score: 0.3704
74
+ 2025-09-26 05:46:39,198 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0197 | Val rms_score: 0.3715
75
+ 2025-09-26 05:46:47,749 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0190 | Val rms_score: 0.3686
76
+ 2025-09-26 05:46:55,758 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0173 | Val rms_score: 0.3738
77
+ 2025-09-26 05:47:03,663 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0166 | Val rms_score: 0.3696
78
+ 2025-09-26 05:47:09,066 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0161 | Val rms_score: 0.3692
79
+ 2025-09-26 05:47:17,045 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0179 | Val rms_score: 0.3727
80
+ 2025-09-26 05:47:25,352 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0162 | Val rms_score: 0.3707
81
+ 2025-09-26 05:47:33,099 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0173 | Val rms_score: 0.3726
82
+ 2025-09-26 05:47:38,242 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0164 | Val rms_score: 0.3715
83
+ 2025-09-26 05:47:45,775 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0162 | Val rms_score: 0.3726
84
+ 2025-09-26 05:47:53,445 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0162 | Val rms_score: 0.3690
85
+ 2025-09-26 05:48:01,106 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0174 | Val rms_score: 0.3678
86
+ 2025-09-26 05:48:06,409 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0168 | Val rms_score: 0.3695
87
+ 2025-09-26 05:48:13,743 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0181 | Val rms_score: 0.3704
88
+ 2025-09-26 05:48:22,163 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0170 | Val rms_score: 0.3719
89
+ 2025-09-26 05:48:30,146 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0199 | Val rms_score: 0.3759
90
+ 2025-09-26 05:48:35,298 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0162 | Val rms_score: 0.3728
91
+ 2025-09-26 05:48:42,970 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0187 | Val rms_score: 0.3693
92
+ 2025-09-26 05:48:50,837 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0159 | Val rms_score: 0.3698
93
+ 2025-09-26 05:48:58,755 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0169 | Val rms_score: 0.3725
94
+ 2025-09-26 05:49:03,972 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0167 | Val rms_score: 0.3727
95
+ 2025-09-26 05:49:11,946 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0160 | Val rms_score: 0.3716
96
+ 2025-09-26 05:49:19,527 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0169 | Val rms_score: 0.3724
97
+ 2025-09-26 05:49:27,538 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0169 | Val rms_score: 0.3727
98
+ 2025-09-26 05:49:32,555 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0159 | Val rms_score: 0.3675
99
+ 2025-09-26 05:49:32,734 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 4590
100
+ 2025-09-26 05:49:33,386 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 85 with val rms_score: 0.3675
101
+ 2025-09-26 05:49:40,920 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0147 | Val rms_score: 0.3700
102
+ 2025-09-26 05:49:48,974 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0157 | Val rms_score: 0.3672
103
+ 2025-09-26 05:49:49,215 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 4698
104
+ 2025-09-26 05:49:49,849 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 87 with val rms_score: 0.3672
105
+ 2025-09-26 05:49:57,559 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0177 | Val rms_score: 0.3700
106
+ 2025-09-26 05:50:02,619 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0178 | Val rms_score: 0.3695
107
+ 2025-09-26 05:50:10,308 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0149 | Val rms_score: 0.3672
108
+ 2025-09-26 05:50:18,156 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0168 | Val rms_score: 0.3748
109
+ 2025-09-26 05:50:26,008 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0171 | Val rms_score: 0.3704
110
+ 2025-09-26 05:50:32,676 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0154 | Val rms_score: 0.3712
111
+ 2025-09-26 05:50:40,754 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0156 | Val rms_score: 0.3700
112
+ 2025-09-26 05:50:49,125 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0140 | Val rms_score: 0.3723
113
+ 2025-09-26 05:50:57,268 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0156 | Val rms_score: 0.3722
114
+ 2025-09-26 05:51:03,525 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0156 | Val rms_score: 0.3683
115
+ 2025-09-26 05:51:11,184 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0159 | Val rms_score: 0.3727
116
+ 2025-09-26 05:51:19,158 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0143 | Val rms_score: 0.3718
117
+ 2025-09-26 05:51:26,315 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0140 | Val rms_score: 0.3710
118
+ 2025-09-26 05:51:26,970 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4192
119
+ 2025-09-26 05:51:27,294 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_microsom_stab_h at 2025-09-26_05-51-27
120
+ 2025-09-26 05:51:31,086 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7500 | Val rms_score: 0.4064
121
+ 2025-09-26 05:51:31,087 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
122
+ 2025-09-26 05:51:32,231 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4064
123
+ 2025-09-26 05:51:40,124 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5625 | Val rms_score: 0.3944
124
+ 2025-09-26 05:51:40,346 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
125
+ 2025-09-26 05:51:41,099 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.3944
126
+ 2025-09-26 05:51:48,855 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4306 | Val rms_score: 0.3899
127
+ 2025-09-26 05:51:49,085 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 162
128
+ 2025-09-26 05:51:49,768 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.3899
129
+ 2025-09-26 05:51:57,817 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3281 | Val rms_score: 0.3933
130
+ 2025-09-26 05:52:03,044 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2350 | Val rms_score: 0.3852
131
+ 2025-09-26 05:52:03,290 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 270
132
+ 2025-09-26 05:52:04,022 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.3852
133
+ 2025-09-26 05:52:11,797 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1549 | Val rms_score: 0.3950
134
+ 2025-09-26 05:52:19,591 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1296 | Val rms_score: 0.3757
135
+ 2025-09-26 05:52:19,804 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 378
136
+ 2025-09-26 05:52:20,551 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.3757
137
+ 2025-09-26 05:52:28,250 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1011 | Val rms_score: 0.3755
138
+ 2025-09-26 05:52:28,472 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 432
139
+ 2025-09-26 05:52:29,140 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.3755
140
+ 2025-09-26 05:52:34,719 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0804 | Val rms_score: 0.3808
141
+ 2025-09-26 05:52:42,313 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0707 | Val rms_score: 0.3807
142
+ 2025-09-26 05:52:49,464 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0587 | Val rms_score: 0.3813
143
+ 2025-09-26 05:52:57,465 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0570 | Val rms_score: 0.3731
144
+ 2025-09-26 05:52:57,719 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 648
145
+ 2025-09-26 05:52:58,449 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.3731
146
+ 2025-09-26 05:53:03,476 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0928 | Val rms_score: 0.3836
147
+ 2025-09-26 05:53:11,192 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0527 | Val rms_score: 0.3774
148
+ 2025-09-26 05:53:19,222 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0434 | Val rms_score: 0.3774
149
+ 2025-09-26 05:53:26,784 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0414 | Val rms_score: 0.3729
150
+ 2025-09-26 05:53:27,341 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 864
151
+ 2025-09-26 05:53:28,021 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.3729
152
+ 2025-09-26 05:53:33,183 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0408 | Val rms_score: 0.3850
153
+ 2025-09-26 05:53:40,386 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0422 | Val rms_score: 0.3873
154
+ 2025-09-26 05:53:48,776 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0359 | Val rms_score: 0.3760
155
+ 2025-09-26 05:53:55,688 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0352 | Val rms_score: 0.3850
156
+ 2025-09-26 05:54:01,145 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0315 | Val rms_score: 0.3657
157
+ 2025-09-26 05:54:01,669 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1134
158
+ 2025-09-26 05:54:02,355 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 21 with val rms_score: 0.3657
159
+ 2025-09-26 05:54:09,879 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0334 | Val rms_score: 0.3673
160
+ 2025-09-26 05:54:17,013 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0294 | Val rms_score: 0.3695
161
+ 2025-09-26 05:54:24,325 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0299 | Val rms_score: 0.3651
162
+ 2025-09-26 05:54:24,682 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1296
163
+ 2025-09-26 05:54:25,322 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 24 with val rms_score: 0.3651
164
+ 2025-09-26 05:54:30,810 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0297 | Val rms_score: 0.3665
165
+ 2025-09-26 05:54:37,777 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0317 | Val rms_score: 0.3765
166
+ 2025-09-26 05:54:45,310 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0275 | Val rms_score: 0.3691
167
+ 2025-09-26 05:54:53,174 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0247 | Val rms_score: 0.3658
168
+ 2025-09-26 05:55:00,637 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0275 | Val rms_score: 0.3709
169
+ 2025-09-26 05:55:05,997 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0268 | Val rms_score: 0.3747
170
+ 2025-09-26 05:55:14,345 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0236 | Val rms_score: 0.3663
171
+ 2025-09-26 05:55:22,567 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0283 | Val rms_score: 0.3683
172
+ 2025-09-26 05:55:29,723 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0265 | Val rms_score: 0.3663
173
+ 2025-09-26 05:55:34,745 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0278 | Val rms_score: 0.3703
174
+ 2025-09-26 05:55:42,324 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0245 | Val rms_score: 0.3643
175
+ 2025-09-26 05:55:42,505 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1890
176
+ 2025-09-26 05:55:43,165 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 35 with val rms_score: 0.3643
177
+ 2025-09-26 05:55:51,192 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0234 | Val rms_score: 0.3672
178
+ 2025-09-26 05:55:59,861 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0243 | Val rms_score: 0.3700
179
+ 2025-09-26 05:56:05,590 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0255 | Val rms_score: 0.3678
180
+ 2025-09-26 05:56:13,220 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0208 | Val rms_score: 0.3705
181
+ 2025-09-26 05:56:20,918 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0223 | Val rms_score: 0.3597
182
+ 2025-09-26 05:56:21,089 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2160
183
+ 2025-09-26 05:56:21,927 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 40 with val rms_score: 0.3597
184
+ 2025-09-26 05:56:29,291 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0243 | Val rms_score: 0.3727
185
+ 2025-09-26 05:56:35,076 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0223 | Val rms_score: 0.3675
186
+ 2025-09-26 05:56:43,319 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0221 | Val rms_score: 0.3707
187
+ 2025-09-26 05:56:51,305 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0221 | Val rms_score: 0.3695
188
+ 2025-09-26 05:56:59,288 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0221 | Val rms_score: 0.3714
189
+ 2025-09-26 05:57:04,893 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0198 | Val rms_score: 0.3654
190
+ 2025-09-26 05:57:13,223 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0190 | Val rms_score: 0.3697
191
+ 2025-09-26 05:57:21,405 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0210 | Val rms_score: 0.3698
192
+ 2025-09-26 05:57:29,162 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0212 | Val rms_score: 0.3675
193
+ 2025-09-26 05:57:34,812 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0197 | Val rms_score: 0.3654
194
+ 2025-09-26 05:57:42,931 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0190 | Val rms_score: 0.3723
195
+ 2025-09-26 05:57:51,095 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0187 | Val rms_score: 0.3667
196
+ 2025-09-26 05:57:58,811 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0194 | Val rms_score: 0.3718
+ 2025-09-26 05:58:04,084 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0197 | Val rms_score: 0.3718
+ 2025-09-26 05:58:11,264 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0173 | Val rms_score: 0.3669
+ 2025-09-26 05:58:20,133 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0174 | Val rms_score: 0.3646
+ 2025-09-26 05:58:29,190 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0176 | Val rms_score: 0.3675
+ 2025-09-26 05:58:34,970 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0192 | Val rms_score: 0.3691
+ 2025-09-26 05:58:43,151 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0169 | Val rms_score: 0.3686
+ 2025-09-26 05:58:50,824 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0187 | Val rms_score: 0.3731
+ 2025-09-26 05:58:58,857 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0184 | Val rms_score: 0.3707
+ 2025-09-26 05:59:04,352 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0173 | Val rms_score: 0.3690
+ 2025-09-26 05:59:12,253 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0170 | Val rms_score: 0.3698
+ 2025-09-26 05:59:20,540 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0179 | Val rms_score: 0.3637
+ 2025-09-26 05:59:28,566 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0179 | Val rms_score: 0.3719
+ 2025-09-26 05:59:34,186 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0174 | Val rms_score: 0.3699
+ 2025-09-26 05:59:43,035 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0195 | Val rms_score: 0.3688
+ 2025-09-26 05:59:50,948 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0177 | Val rms_score: 0.3644
+ 2025-09-26 05:59:56,580 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0174 | Val rms_score: 0.3688
+ 2025-09-26 06:00:04,465 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0167 | Val rms_score: 0.3678
+ 2025-09-26 06:00:12,005 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0185 | Val rms_score: 0.3748
+ 2025-09-26 06:00:20,495 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0167 | Val rms_score: 0.3673
+ 2025-09-26 06:00:28,248 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0171 | Val rms_score: 0.3719
+ 2025-09-26 06:00:33,717 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0173 | Val rms_score: 0.3740
+ 2025-09-26 06:00:43,030 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0162 | Val rms_score: 0.3749
+ 2025-09-26 06:00:50,658 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0193 | Val rms_score: 0.3655
+ 2025-09-26 06:00:56,091 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0169 | Val rms_score: 0.3738
+ 2025-09-26 06:01:03,646 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0199 | Val rms_score: 0.3684
+ 2025-09-26 06:01:11,699 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0162 | Val rms_score: 0.3697
+ 2025-09-26 06:01:19,407 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0177 | Val rms_score: 0.3662
+ 2025-09-26 06:01:27,963 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0165 | Val rms_score: 0.3670
+ 2025-09-26 06:01:33,525 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0149 | Val rms_score: 0.3673
+ 2025-09-26 06:01:41,019 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0159 | Val rms_score: 0.3647
+ 2025-09-26 06:01:48,459 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0156 | Val rms_score: 0.3682
+ 2025-09-26 06:01:55,661 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0156 | Val rms_score: 0.3726
+ 2025-09-26 06:02:00,966 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0161 | Val rms_score: 0.3688
+ 2025-09-26 06:02:09,020 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0163 | Val rms_score: 0.3707
+ 2025-09-26 06:02:17,180 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0159 | Val rms_score: 0.3689
+ 2025-09-26 06:02:24,689 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0155 | Val rms_score: 0.3716
+ 2025-09-26 06:02:29,645 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0159 | Val rms_score: 0.3658
+ 2025-09-26 06:02:37,552 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0173 | Val rms_score: 0.3695
+ 2025-09-26 06:02:45,737 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0163 | Val rms_score: 0.3665
+ 2025-09-26 06:02:55,070 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0153 | Val rms_score: 0.3694
+ 2025-09-26 06:03:00,041 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0152 | Val rms_score: 0.3680
+ 2025-09-26 06:03:07,354 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0156 | Val rms_score: 0.3697
+ 2025-09-26 06:03:14,918 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0152 | Val rms_score: 0.3683
+ 2025-09-26 06:03:22,548 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0151 | Val rms_score: 0.3679
+ 2025-09-26 06:03:28,048 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0141 | Val rms_score: 0.3678
+ 2025-09-26 06:03:36,345 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0145 | Val rms_score: 0.3703
+ 2025-09-26 06:03:44,040 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0140 | Val rms_score: 0.3706
+ 2025-09-26 06:03:44,649 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4259
+ 2025-09-26 06:03:45,142 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_microsom_stab_h at 2025-09-26_06-03-45
+ 2025-09-26 06:03:52,072 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7361 | Val rms_score: 0.4059
+ 2025-09-26 06:03:52,072 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
+ 2025-09-26 06:03:53,095 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4059
+ 2025-09-26 06:03:58,214 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5117 | Val rms_score: 0.4009
+ 2025-09-26 06:03:58,428 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
+ 2025-09-26 06:03:59,084 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4009
+ 2025-09-26 06:04:06,768 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4074 | Val rms_score: 0.3795
+ 2025-09-26 06:04:06,975 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 162
+ 2025-09-26 06:04:07,663 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.3795
+ 2025-09-26 06:04:15,158 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2715 | Val rms_score: 0.3870
+ 2025-09-26 06:04:22,152 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2095 | Val rms_score: 0.3828
+ 2025-09-26 06:04:26,707 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1615 | Val rms_score: 0.3745
+ 2025-09-26 06:04:27,227 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 324
+ 2025-09-26 06:04:27,818 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.3745
+ 2025-09-26 06:04:35,448 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1273 | Val rms_score: 0.3814
+ 2025-09-26 06:04:43,193 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0957 | Val rms_score: 0.3949
+ 2025-09-26 06:04:50,334 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0775 | Val rms_score: 0.3802
+ 2025-09-26 06:04:55,365 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0703 | Val rms_score: 0.3780
+ 2025-09-26 06:05:02,750 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0605 | Val rms_score: 0.3810
+ 2025-09-26 06:05:11,321 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0531 | Val rms_score: 0.3892
+ 2025-09-26 06:05:19,108 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0547 | Val rms_score: 0.3748
+ 2025-09-26 06:05:24,336 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0460 | Val rms_score: 0.3906
+ 2025-09-26 06:05:32,184 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0555 | Val rms_score: 0.3942
+ 2025-09-26 06:05:39,805 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0428 | Val rms_score: 0.3849
+ 2025-09-26 06:05:47,946 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0414 | Val rms_score: 0.3891
+ 2025-09-26 06:05:53,432 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0388 | Val rms_score: 0.3754
+ 2025-09-26 06:06:01,819 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0376 | Val rms_score: 0.3778
+ 2025-09-26 06:06:09,050 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0373 | Val rms_score: 0.3726
+ 2025-09-26 06:06:09,221 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1080
+ 2025-09-26 06:06:09,871 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.3726
+ 2025-09-26 06:06:17,572 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0326 | Val rms_score: 0.3826
+ 2025-09-26 06:06:23,725 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0307 | Val rms_score: 0.3733
+ 2025-09-26 06:06:31,161 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0314 | Val rms_score: 0.3744
+ 2025-09-26 06:06:39,171 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0331 | Val rms_score: 0.3699
+ 2025-09-26 06:06:39,341 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1296
+ 2025-09-26 06:06:39,986 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 24 with val rms_score: 0.3699
+ 2025-09-26 06:06:47,893 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0298 | Val rms_score: 0.3866
+ 2025-09-26 06:06:53,653 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0293 | Val rms_score: 0.3752
+ 2025-09-26 06:07:02,174 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0307 | Val rms_score: 0.3827
+ 2025-09-26 06:07:10,281 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0257 | Val rms_score: 0.3853
+ 2025-09-26 06:07:18,025 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0284 | Val rms_score: 0.3718
+ 2025-09-26 06:07:23,392 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0260 | Val rms_score: 0.3751
+ 2025-09-26 06:07:30,675 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0268 | Val rms_score: 0.3736
+ 2025-09-26 06:07:39,086 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0255 | Val rms_score: 0.3820
+ 2025-09-26 06:07:46,383 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0258 | Val rms_score: 0.3832
+ 2025-09-26 06:07:54,259 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0231 | Val rms_score: 0.3845
+ 2025-09-26 06:07:58,636 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0250 | Val rms_score: 0.3800
+ 2025-09-26 06:08:06,676 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0243 | Val rms_score: 0.3837
+ 2025-09-26 06:08:14,487 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0231 | Val rms_score: 0.3778
+ 2025-09-26 06:08:23,563 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0236 | Val rms_score: 0.3712
+ 2025-09-26 06:08:28,665 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0213 | Val rms_score: 0.3800
+ 2025-09-26 06:08:36,194 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0240 | Val rms_score: 0.3794
+ 2025-09-26 06:08:43,862 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0247 | Val rms_score: 0.3817
+ 2025-09-26 06:08:52,548 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0220 | Val rms_score: 0.3725
+ 2025-09-26 06:08:58,240 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0249 | Val rms_score: 0.3725
+ 2025-09-26 06:09:06,020 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0224 | Val rms_score: 0.3681
+ 2025-09-26 06:09:06,212 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2376
+ 2025-09-26 06:09:06,939 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 44 with val rms_score: 0.3681
+ 2025-09-26 06:09:14,962 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0223 | Val rms_score: 0.3745
+ 2025-09-26 06:09:22,589 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0194 | Val rms_score: 0.3749
+ 2025-09-26 06:09:29,071 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0205 | Val rms_score: 0.3809
+ 2025-09-26 06:09:37,047 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0223 | Val rms_score: 0.3743
+ 2025-09-26 06:09:44,664 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0202 | Val rms_score: 0.3756
+ 2025-09-26 06:09:52,806 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0203 | Val rms_score: 0.3688
+ 2025-09-26 06:09:58,428 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0204 | Val rms_score: 0.3791
+ 2025-09-26 06:10:06,472 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0214 | Val rms_score: 0.3777
+ 2025-09-26 06:10:14,065 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0191 | Val rms_score: 0.3720
+ 2025-09-26 06:10:21,506 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0176 | Val rms_score: 0.3742
+ 2025-09-26 06:10:26,820 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0197 | Val rms_score: 0.3706
+ 2025-09-26 06:10:36,481 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0205 | Val rms_score: 0.3730
+ 2025-09-26 06:10:44,336 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0192 | Val rms_score: 0.3727
+ 2025-09-26 06:10:52,059 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0177 | Val rms_score: 0.3748
+ 2025-09-26 06:10:56,862 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0178 | Val rms_score: 0.3744
+ 2025-09-26 06:11:04,893 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0176 | Val rms_score: 0.3722
+ 2025-09-26 06:11:13,138 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0181 | Val rms_score: 0.3754
+ 2025-09-26 06:11:21,302 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0177 | Val rms_score: 0.3721
+ 2025-09-26 06:11:26,820 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0165 | Val rms_score: 0.3768
+ 2025-09-26 06:11:34,557 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0194 | Val rms_score: 0.3754
+ 2025-09-26 06:11:42,405 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0188 | Val rms_score: 0.3742
+ 2025-09-26 06:11:50,135 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0179 | Val rms_score: 0.3788
+ 2025-09-26 06:11:55,746 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0176 | Val rms_score: 0.3776
+ 2025-09-26 06:12:03,442 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0187 | Val rms_score: 0.3747
+ 2025-09-26 06:12:10,800 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0175 | Val rms_score: 0.3729
+ 2025-09-26 06:12:18,629 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0177 | Val rms_score: 0.3749
+ 2025-09-26 06:12:23,668 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0165 | Val rms_score: 0.3754
+ 2025-09-26 06:12:31,642 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0163 | Val rms_score: 0.3713
+ 2025-09-26 06:12:39,762 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0173 | Val rms_score: 0.3703
+ 2025-09-26 06:12:47,680 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0153 | Val rms_score: 0.3759
+ 2025-09-26 06:12:54,263 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0166 | Val rms_score: 0.3729
+ 2025-09-26 06:13:02,470 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0160 | Val rms_score: 0.3722
+ 2025-09-26 06:13:10,718 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0167 | Val rms_score: 0.3710
+ 2025-09-26 06:13:19,188 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0191 | Val rms_score: 0.3713
+ 2025-09-26 06:13:24,934 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0162 | Val rms_score: 0.3729
+ 2025-09-26 06:13:33,242 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0152 | Val rms_score: 0.3720
+ 2025-09-26 06:13:41,749 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0163 | Val rms_score: 0.3700
+ 2025-09-26 06:13:50,550 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0169 | Val rms_score: 0.3763
+ 2025-09-26 06:13:56,511 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0164 | Val rms_score: 0.3744
+ 2025-09-26 06:14:04,895 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0156 | Val rms_score: 0.3745
+ 2025-09-26 06:14:13,087 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0149 | Val rms_score: 0.3745
+ 2025-09-26 06:14:21,475 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0162 | Val rms_score: 0.3749
+ 2025-09-26 06:14:27,387 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0155 | Val rms_score: 0.3714
+ 2025-09-26 06:14:35,215 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0159 | Val rms_score: 0.3721
+ 2025-09-26 06:14:42,582 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0145 | Val rms_score: 0.3710
+ 2025-09-26 06:14:50,386 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0162 | Val rms_score: 0.3745
+ 2025-09-26 06:14:55,121 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0166 | Val rms_score: 0.3749
+ 2025-09-26 06:15:03,133 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0151 | Val rms_score: 0.3732
+ 2025-09-26 06:15:11,679 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0151 | Val rms_score: 0.3788
+ 2025-09-26 06:15:19,069 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0158 | Val rms_score: 0.3758
+ 2025-09-26 06:15:24,252 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0141 | Val rms_score: 0.3722
+ 2025-09-26 06:15:31,372 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0144 | Val rms_score: 0.3727
+ 2025-09-26 06:15:39,519 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0155 | Val rms_score: 0.3733
+ 2025-09-26 06:15:46,890 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0147 | Val rms_score: 0.3748
+ 2025-09-26 06:15:52,775 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0146 | Val rms_score: 0.3731
+ 2025-09-26 06:16:00,853 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0142 | Val rms_score: 0.3761
+ 2025-09-26 06:16:01,453 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4292
+ 2025-09-26 06:16:01,822 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4248, Std Dev: 0.0041
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_microsom_stab_r_epochs100_batch_size16_20250927_144017.log ADDED
@@ -0,0 +1,325 @@
+ 2025-09-27 14:40:17,784 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Running benchmark for dataset: adme_microsom_stab_r
+ 2025-09-27 14:40:17,785 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - dataset: adme_microsom_stab_r, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
+ 2025-09-27 14:40:17,788 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Starting triplicate run 1 for dataset adme_microsom_stab_r at 2025-09-27_14-40-17
+ 2025-09-27 14:40:26,545 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 1/100 | Train Loss: 0.5799 | Val rms_score: 0.5069
+ 2025-09-27 14:40:26,545 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Global step of best model: 136
+ 2025-09-27 14:40:27,405 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Best model saved at epoch 1 with val rms_score: 0.5069
+ 2025-09-27 14:40:38,023 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 2/100 | Train Loss: 0.4375 | Val rms_score: 0.5272
+ 2025-09-27 14:40:48,202 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 3/100 | Train Loss: 0.2676 | Val rms_score: 0.4896
+ 2025-09-27 14:40:48,370 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Global step of best model: 408
+ 2025-09-27 14:40:48,947 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Best model saved at epoch 3 with val rms_score: 0.4896
+ 2025-09-27 14:40:59,712 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 4/100 | Train Loss: 0.2457 | Val rms_score: 0.4880
+ 2025-09-27 14:40:59,892 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Global step of best model: 544
+ 2025-09-27 14:41:00,484 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Best model saved at epoch 4 with val rms_score: 0.4880
+ 2025-09-27 14:41:11,290 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 5/100 | Train Loss: 0.1906 | Val rms_score: 0.4980
+ 2025-09-27 14:41:22,155 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 6/100 | Train Loss: 0.1836 | Val rms_score: 0.4872
+ 2025-09-27 14:41:22,627 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Global step of best model: 816
+ 2025-09-27 14:41:23,347 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Best model saved at epoch 6 with val rms_score: 0.4872
+ 2025-09-27 14:41:34,030 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 7/100 | Train Loss: 0.1262 | Val rms_score: 0.4987
+ 2025-09-27 14:41:46,140 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 8/100 | Train Loss: 0.0895 | Val rms_score: 0.4901
+ 2025-09-27 14:41:56,326 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 9/100 | Train Loss: 0.0788 | Val rms_score: 0.5025
+ 2025-09-27 14:42:06,500 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 10/100 | Train Loss: 0.0643 | Val rms_score: 0.5062
+ 2025-09-27 14:42:17,159 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 11/100 | Train Loss: 0.0527 | Val rms_score: 0.4935
+ 2025-09-27 14:42:27,809 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 12/100 | Train Loss: 0.0518 | Val rms_score: 0.5046
+ 2025-09-27 14:42:38,314 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 13/100 | Train Loss: 0.0443 | Val rms_score: 0.5049
+ 2025-09-27 14:42:48,927 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 14/100 | Train Loss: 0.0371 | Val rms_score: 0.5081
+ 2025-09-27 14:43:00,558 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 15/100 | Train Loss: 0.0393 | Val rms_score: 0.5089
+ 2025-09-27 14:43:10,682 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 16/100 | Train Loss: 0.0358 | Val rms_score: 0.5164
+ 2025-09-27 14:43:21,198 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 17/100 | Train Loss: 0.0306 | Val rms_score: 0.5135
+ 2025-09-27 14:43:31,968 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 18/100 | Train Loss: 0.0332 | Val rms_score: 0.5008
+ 2025-09-27 14:43:42,184 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 19/100 | Train Loss: 0.0335 | Val rms_score: 0.5074
+ 2025-09-27 14:43:52,641 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 20/100 | Train Loss: 0.0273 | Val rms_score: 0.5049
+ 2025-09-27 14:44:03,246 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 21/100 | Train Loss: 0.0321 | Val rms_score: 0.5056
+ 2025-09-27 14:44:14,814 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 22/100 | Train Loss: 0.0277 | Val rms_score: 0.5000
+ 2025-09-27 14:44:26,516 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 23/100 | Train Loss: 0.0276 | Val rms_score: 0.5051
+ 2025-09-27 14:44:37,610 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 24/100 | Train Loss: 0.0253 | Val rms_score: 0.5100
+ 2025-09-27 14:44:48,440 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 25/100 | Train Loss: 0.0253 | Val rms_score: 0.5076
+ 2025-09-27 14:44:58,507 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 26/100 | Train Loss: 0.0284 | Val rms_score: 0.5051
+ 2025-09-27 14:45:09,231 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 27/100 | Train Loss: 0.0244 | Val rms_score: 0.5016
+ 2025-09-27 14:45:20,037 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 28/100 | Train Loss: 0.0243 | Val rms_score: 0.5070
+ 2025-09-27 14:45:30,452 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 29/100 | Train Loss: 0.0257 | Val rms_score: 0.5058
+ 2025-09-27 14:45:41,956 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 30/100 | Train Loss: 0.0244 | Val rms_score: 0.5064
+ 2025-09-27 14:45:52,445 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 31/100 | Train Loss: 0.0255 | Val rms_score: 0.5120
+ 2025-09-27 14:46:03,347 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 32/100 | Train Loss: 0.0234 | Val rms_score: 0.5121
+ 2025-09-27 14:46:13,505 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 33/100 | Train Loss: 0.0202 | Val rms_score: 0.5052
+ 2025-09-27 14:46:24,187 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 34/100 | Train Loss: 0.0249 | Val rms_score: 0.5060
+ 2025-09-27 14:46:34,505 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 35/100 | Train Loss: 0.0217 | Val rms_score: 0.5028
+ 2025-09-27 14:46:44,804 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 36/100 | Train Loss: 0.0215 | Val rms_score: 0.5022
+ 2025-09-27 14:46:56,346 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 37/100 | Train Loss: 0.0201 | Val rms_score: 0.4997
+ 2025-09-27 14:47:07,209 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 38/100 | Train Loss: 0.0234 | Val rms_score: 0.5123
+ 2025-09-27 14:47:17,630 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 39/100 | Train Loss: 0.0133 | Val rms_score: 0.5029
+ 2025-09-27 14:47:27,919 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 40/100 | Train Loss: 0.0199 | Val rms_score: 0.5042
+ 2025-09-27 14:47:38,383 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 41/100 | Train Loss: 0.0213 | Val rms_score: 0.5065
+ 2025-09-27 14:47:49,093 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 42/100 | Train Loss: 0.0265 | Val rms_score: 0.5039
+ 2025-09-27 14:47:59,344 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 43/100 | Train Loss: 0.0192 | Val rms_score: 0.5039
+ 2025-09-27 14:48:09,779 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 44/100 | Train Loss: 0.0190 | Val rms_score: 0.5070
+ 2025-09-27 14:48:21,084 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 45/100 | Train Loss: 0.0199 | Val rms_score: 0.5059
+ 2025-09-27 14:48:31,239 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 46/100 | Train Loss: 0.0171 | Val rms_score: 0.5074
+ 2025-09-27 14:48:41,979 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 47/100 | Train Loss: 0.0195 | Val rms_score: 0.5082
+ 2025-09-27 14:48:52,433 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 48/100 | Train Loss: 0.0206 | Val rms_score: 0.5037
+ 2025-09-27 14:49:03,016 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 49/100 | Train Loss: 0.0209 | Val rms_score: 0.5028
+ 2025-09-27 14:49:13,891 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 50/100 | Train Loss: 0.0187 | Val rms_score: 0.5037
+ 2025-09-27 14:49:25,348 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 51/100 | Train Loss: 0.0188 | Val rms_score: 0.5080
+ 2025-09-27 14:49:37,483 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 52/100 | Train Loss: 0.0173 | Val rms_score: 0.5070
+ 2025-09-27 14:49:47,770 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 53/100 | Train Loss: 0.0208 | Val rms_score: 0.5049
+ 2025-09-27 14:49:58,197 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 54/100 | Train Loss: 0.0187 | Val rms_score: 0.5059
+ 2025-09-27 14:50:08,564 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 55/100 | Train Loss: 0.0180 | Val rms_score: 0.5019
+ 2025-09-27 14:50:18,676 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 56/100 | Train Loss: 0.0203 | Val rms_score: 0.5012
+ 2025-09-27 14:50:29,331 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 57/100 | Train Loss: 0.0184 | Val rms_score: 0.5102
+ 2025-09-27 14:50:40,133 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 58/100 | Train Loss: 0.0185 | Val rms_score: 0.5040
+ 2025-09-27 14:50:51,478 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 59/100 | Train Loss: 0.0176 | Val rms_score: 0.5074
+ 2025-09-27 14:51:01,634 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 60/100 | Train Loss: 0.0180 | Val rms_score: 0.5020
+ 2025-09-27 14:51:12,635 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 61/100 | Train Loss: 0.0186 | Val rms_score: 0.5012
+ 2025-09-27 14:51:23,105 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 62/100 | Train Loss: 0.0179 | Val rms_score: 0.5043
+ 2025-09-27 14:51:33,553 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 63/100 | Train Loss: 0.0170 | Val rms_score: 0.5020
+ 2025-09-27 14:51:43,863 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 64/100 | Train Loss: 0.0115 | Val rms_score: 0.5044
+ 2025-09-27 14:51:54,642 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 65/100 | Train Loss: 0.0177 | Val rms_score: 0.5071
+ 2025-09-27 14:52:04,743 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 66/100 | Train Loss: 0.0157 | Val rms_score: 0.5071
+ 2025-09-27 14:52:16,452 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 67/100 | Train Loss: 0.0186 | Val rms_score: 0.5047
+ 2025-09-27 14:52:27,091 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 68/100 | Train Loss: 0.0195 | Val rms_score: 0.5087
+ 2025-09-27 14:52:37,305 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 69/100 | Train Loss: 0.0161 | Val rms_score: 0.5006
+ 2025-09-27 14:52:48,035 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 70/100 | Train Loss: 0.0150 | Val rms_score: 0.4984
+ 2025-09-27 14:52:58,615 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 71/100 | Train Loss: 0.0160 | Val rms_score: 0.5010
+ 2025-09-27 14:53:09,316 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 72/100 | Train Loss: 0.0162 | Val rms_score: 0.5034
+ 2025-09-27 14:53:19,658 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 73/100 | Train Loss: 0.0167 | Val rms_score: 0.5011
+ 2025-09-27 14:53:30,988 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 74/100 | Train Loss: 0.0167 | Val rms_score: 0.5022
+ 2025-09-27 14:53:42,707 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 75/100 | Train Loss: 0.0173 | Val rms_score: 0.5010
+ 2025-09-27 14:53:53,135 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 76/100 | Train Loss: 0.0158 | Val rms_score: 0.5023
+ 2025-09-27 14:54:04,202 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 77/100 | Train Loss: 0.0177 | Val rms_score: 0.5014
+ 2025-09-27 14:54:15,338 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 78/100 | Train Loss: 0.0145 | Val rms_score: 0.4948
+ 2025-09-27 14:54:25,608 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 79/100 | Train Loss: 0.0142 | Val rms_score: 0.5021
+ 2025-09-27 14:54:36,192 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 80/100 | Train Loss: 0.0159 | Val rms_score: 0.5004
+ 2025-09-27 14:54:47,471 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 81/100 | Train Loss: 0.0166 | Val rms_score: 0.5042
+ 2025-09-27 14:54:59,060 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 82/100 | Train Loss: 0.0138 | Val rms_score: 0.4993
+ 2025-09-27 14:55:09,386 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 83/100 | Train Loss: 0.0156 | Val rms_score: 0.4967
+ 2025-09-27 14:55:20,257 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 84/100 | Train Loss: 0.0159 | Val rms_score: 0.5031
+ 2025-09-27 14:55:31,090 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 85/100 | Train Loss: 0.0166 | Val rms_score: 0.5028
+ 2025-09-27 14:55:41,364 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 86/100 | Train Loss: 0.0160 | Val rms_score: 0.5045
+ 2025-09-27 14:55:52,248 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 87/100 | Train Loss: 0.0155 | Val rms_score: 0.5054
+ 2025-09-27 14:56:03,256 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 88/100 | Train Loss: 0.0160 | Val rms_score: 0.5024
+ 2025-09-27 14:56:14,552 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 89/100 | Train Loss: 0.0127 | Val rms_score: 0.5004
+ 2025-09-27 14:56:24,897 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 90/100 | Train Loss: 0.0144 | Val rms_score: 0.5068
+ 2025-09-27 14:56:35,344 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 91/100 | Train Loss: 0.0152 | Val rms_score: 0.5036
+ 2025-09-27 14:56:46,655 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 92/100 | Train Loss: 0.0152 | Val rms_score: 0.5087
+ 2025-09-27 14:56:57,114 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 93/100 | Train Loss: 0.0170 | Val rms_score: 0.5054
+ 2025-09-27 14:57:07,333 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 94/100 | Train Loss: 0.0163 | Val rms_score: 0.5013
+ 2025-09-27 14:57:18,133 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 95/100 | Train Loss: 0.0164 | Val rms_score: 0.5019
+ 2025-09-27 14:57:30,221 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 96/100 | Train Loss: 0.0163 | Val rms_score: 0.5026
+ 2025-09-27 14:57:41,423 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 97/100 | Train Loss: 0.0152 | Val rms_score: 0.5068
+ 2025-09-27 14:57:52,093 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 98/100 | Train Loss: 0.0158 | Val rms_score: 0.5048
+ 2025-09-27 14:58:04,621 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 99/100 | Train Loss: 0.0151 | Val rms_score: 0.4988
+ 2025-09-27 14:58:15,189 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 100/100 | Train Loss: 0.0159 | Val rms_score: 0.5076
+ 2025-09-27 14:58:16,068 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Test rms_score: 0.4446
+ 2025-09-27 14:58:16,414 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Starting triplicate run 2 for dataset adme_microsom_stab_r at 2025-09-27_14-58-16
+ 2025-09-27 14:58:26,365 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 1/100 | Train Loss: 0.5764 | Val rms_score: 0.5391
+ 2025-09-27 14:58:26,365 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Global step of best model: 136
+ 2025-09-27 14:58:28,517 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Best model saved at epoch 1 with val rms_score: 0.5391
+ 2025-09-27 14:58:41,403 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 2/100 | Train Loss: 0.4757 | Val rms_score: 0.4934
+ 2025-09-27 14:58:41,587 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Global step of best model: 272
+ 2025-09-27 14:58:42,177 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Best model saved at epoch 2 with val rms_score: 0.4934
+ 2025-09-27 14:58:53,210 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 3/100 | Train Loss: 0.2598 | Val rms_score: 0.4992
+ 2025-09-27 14:59:03,783 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 4/100 | Train Loss: 0.2798 | Val rms_score: 0.5087
+ 2025-09-27 14:59:13,993 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 5/100 | Train Loss: 0.1969 | Val rms_score: 0.5179
+ 2025-09-27 14:59:23,967 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 6/100 | Train Loss: 0.1582 | Val rms_score: 0.5047
+ 2025-09-27 14:59:34,864 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 7/100 | Train Loss: 0.1136 | Val rms_score: 0.4994
+ 2025-09-27 14:59:46,664 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 8/100 | Train Loss: 0.0909 | Val rms_score: 0.5128
+ 2025-09-27 14:59:57,031 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 9/100 | Train Loss: 0.0970 | Val rms_score: 0.5094
+ 2025-09-27 15:00:07,639 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 10/100 | Train Loss: 0.0651 | Val rms_score: 0.5128
+ 2025-09-27 15:00:18,115 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 11/100 | Train Loss: 0.0557 | Val rms_score: 0.5076
+ 2025-09-27 15:00:28,841 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 12/100 | Train Loss: 0.0540 | Val rms_score: 0.5104
+ 2025-09-27 15:00:38,893 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 13/100 | Train Loss: 0.0501 | Val rms_score: 0.5188
+ 2025-09-27 15:00:49,680 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 14/100 | Train Loss: 0.0620 | Val rms_score: 0.5086
+ 2025-09-27 15:01:01,135 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 15/100 | Train Loss: 0.0395 | Val rms_score: 0.5067
+ 2025-09-27 15:01:11,468 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 16/100 | Train Loss: 0.0370 | Val rms_score: 0.5072
+ 2025-09-27 15:01:22,519 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 17/100 | Train Loss: 0.0443 | Val rms_score: 0.5152
+ 2025-09-27 15:01:33,283 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 18/100 | Train Loss: 0.0350 | Val rms_score: 0.5091
+ 2025-09-27 15:01:43,775 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 19/100 | Train Loss: 0.0337 | Val rms_score: 0.5098
+ 2025-09-27 15:01:54,424 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 20/100 | Train Loss: 0.0375 | Val rms_score: 0.5201
+ 2025-09-27 15:02:04,664 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 21/100 | Train Loss: 0.0325 | Val rms_score: 0.5117
+ 2025-09-27 15:02:15,898 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 22/100 | Train Loss: 0.0301 | Val rms_score: 0.5123
+ 2025-09-27 15:02:27,722 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 23/100 | Train Loss: 0.0317 | Val rms_score: 0.5103
+ 2025-09-27 15:02:38,051 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 24/100 | Train Loss: 0.0270 | Val rms_score: 0.5111
+ 2025-09-27 15:02:48,698 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 25/100 | Train Loss: 0.0288 | Val rms_score: 0.5055
+ 2025-09-27 15:02:59,376 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 26/100 | Train Loss: 0.0234 | Val rms_score: 0.5148
+ 2025-09-27 15:03:10,936 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 27/100 | Train Loss: 0.0259 | Val rms_score: 0.5138
+ 2025-09-27 15:03:21,603 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 28/100 | Train Loss: 0.0276 | Val rms_score: 0.5091
+ 2025-09-27 15:03:32,272 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 29/100 | Train Loss: 0.0266 | Val rms_score: 0.5141
+ 2025-09-27 15:03:43,646 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 30/100 | Train Loss: 0.0279 | Val rms_score: 0.5099
+ 2025-09-27 15:03:54,452 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 31/100 | Train Loss: 0.0273 | Val rms_score: 0.5138
+ 2025-09-27 15:04:05,727 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 32/100 | Train Loss: 0.0227 | Val rms_score: 0.5071
+ 2025-09-27 15:04:15,854 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 33/100 | Train Loss: 0.0252 | Val rms_score: 0.5117
+ 2025-09-27 15:04:26,571 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 34/100 | Train Loss: 0.0243 | Val rms_score: 0.5091
+ 2025-09-27 15:04:37,113 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 35/100 | Train Loss: 0.0228 | Val rms_score: 0.5017
+ 2025-09-27 15:04:48,190 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 36/100 | Train Loss: 0.0216 | Val rms_score: 0.5079
+ 2025-09-27 15:05:00,261 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 37/100 | Train Loss: 0.0236 | Val rms_score: 0.5112
+ 2025-09-27 15:05:10,382 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 38/100 | Train Loss: 0.0227 | Val rms_score: 0.5199
+ 2025-09-27 15:05:21,299 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 39/100 | Train Loss: 0.0310 | Val rms_score: 0.5088
+ 2025-09-27 15:05:31,730 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 40/100 | Train Loss: 0.0200 | Val rms_score: 0.5077
+ 2025-09-27 15:05:42,090 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 41/100 | Train Loss: 0.0205 | Val rms_score: 0.5082
+ 2025-09-27 15:05:53,303 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 42/100 | Train Loss: 0.0203 | Val rms_score: 0.5109
+ 2025-09-27 15:06:03,841 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 43/100 | Train Loss: 0.0210 | Val rms_score: 0.5099
+ 2025-09-27 15:06:14,134 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 44/100 | Train Loss: 0.0206 | Val rms_score: 0.5066
+ 2025-09-27 15:06:25,331 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 45/100 | Train Loss: 0.0214 | Val rms_score: 0.5184
+ 2025-09-27 15:06:36,162 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 46/100 | Train Loss: 0.0197 | Val rms_score: 0.5055
+ 2025-09-27 15:06:46,734 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 47/100 | Train Loss: 0.0182 | Val rms_score: 0.5060
+ 2025-09-27 15:06:57,124 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 48/100 | Train Loss: 0.0172 | Val rms_score: 0.5090
+ 2025-09-27 15:07:07,889 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 49/100 | Train Loss: 0.0179 | Val rms_score: 0.5080
+ 2025-09-27 15:07:18,284 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 50/100 | Train Loss: 0.0183 | Val rms_score: 0.5116
+ 2025-09-27 15:07:28,937 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 51/100 | Train Loss: 0.0207 | Val rms_score: 0.5095
+ 2025-09-27 15:07:40,473 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 52/100 | Train Loss: 0.0192 | Val rms_score: 0.5079
+ 2025-09-27 15:07:51,427 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 53/100 | Train Loss: 0.0142 | Val rms_score: 0.5132
+ 2025-09-27 15:08:01,827 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 54/100 | Train Loss: 0.0191 | Val rms_score: 0.5128
+ 2025-09-27 15:08:12,383 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 55/100 | Train Loss: 0.0182 | Val rms_score: 0.5042
+ 2025-09-27 15:08:23,309 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 56/100 | Train Loss: 0.0150 | Val rms_score: 0.5087
+ 2025-09-27 15:08:34,173 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 57/100 | Train Loss: 0.0177 | Val rms_score: 0.5082
+ 2025-09-27 15:08:44,348 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 58/100 | Train Loss: 0.0193 | Val rms_score: 0.5045
+ 2025-09-27 15:08:55,487 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 59/100 | Train Loss: 0.0192 | Val rms_score: 0.5093
+ 2025-09-27 15:09:06,926 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 60/100 | Train Loss: 0.0174 | Val rms_score: 0.5054
+ 2025-09-27 15:09:17,610 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 61/100 | Train Loss: 0.0191 | Val rms_score: 0.5077
+ 2025-09-27 15:09:28,588 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 62/100 | Train Loss: 0.0200 | Val rms_score: 0.5097
+ 2025-09-27 15:09:39,322 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 63/100 | Train Loss: 0.0194 | Val rms_score: 0.5115
+ 2025-09-27 15:09:49,619 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 64/100 | Train Loss: 0.0187 | Val rms_score: 0.5041
+ 2025-09-27 15:10:00,079 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 65/100 | Train Loss: 0.0159 | Val rms_score: 0.5118
+ 2025-09-27 15:10:10,026 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 66/100 | Train Loss: 0.0181 | Val rms_score: 0.5103
+ 2025-09-27 15:10:22,213 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 67/100 | Train Loss: 0.0133 | Val rms_score: 0.5090
+ 2025-09-27 15:10:32,547 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 68/100 | Train Loss: 0.0199 | Val rms_score: 0.5064
+ 2025-09-27 15:10:42,714 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 69/100 | Train Loss: 0.0176 | Val rms_score: 0.5056
+ 2025-09-27 15:10:53,638 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 70/100 | Train Loss: 0.0187 | Val rms_score: 0.5041
+ 2025-09-27 15:11:03,913 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 71/100 | Train Loss: 0.0174 | Val rms_score: 0.5027
+ 2025-09-27 15:11:14,752 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 72/100 | Train Loss: 0.0178 | Val rms_score: 0.5075
+ 2025-09-27 15:11:25,067 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 73/100 | Train Loss: 0.0167 | Val rms_score: 0.5067
+ 2025-09-27 15:11:36,697 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 74/100 | Train Loss: 0.0157 | Val rms_score: 0.5052
+ 2025-09-27 15:11:47,014 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 75/100 | Train Loss: 0.0163 | Val rms_score: 0.5060
+ 2025-09-27 15:11:57,314 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 76/100 | Train Loss: 0.0162 | Val rms_score: 0.5050
+ 2025-09-27 15:12:08,260 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 77/100 | Train Loss: 0.0158 | Val rms_score: 0.5036
+ 2025-09-27 15:12:18,749 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 78/100 | Train Loss: 0.0190 | Val rms_score: 0.5065
+ 2025-09-27 15:12:29,005 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 79/100 | Train Loss: 0.0168 | Val rms_score: 0.5058
+ 2025-09-27 15:12:39,039 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 80/100 | Train Loss: 0.0163 | Val rms_score: 0.5030
+ 2025-09-27 15:12:50,811 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 81/100 | Train Loss: 0.0181 | Val rms_score: 0.5047
+ 2025-09-27 15:13:01,840 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 82/100 | Train Loss: 0.0162 | Val rms_score: 0.5015
+ 2025-09-27 15:13:12,181 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 83/100 | Train Loss: 0.0155 | Val rms_score: 0.5097
+ 2025-09-27 15:13:22,422 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 84/100 | Train Loss: 0.0173 | Val rms_score: 0.5036
+ 2025-09-27 15:13:33,356 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 85/100 | Train Loss: 0.0160 | Val rms_score: 0.5066
+ 2025-09-27 15:13:43,588 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 86/100 | Train Loss: 0.0157 | Val rms_score: 0.5119
+ 2025-09-27 15:13:54,289 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 87/100 | Train Loss: 0.0167 | Val rms_score: 0.5041
+ 2025-09-27 15:14:04,993 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 88/100 | Train Loss: 0.0163 | Val rms_score: 0.5093
+ 2025-09-27 15:14:16,006 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 89/100 | Train Loss: 0.0151 | Val rms_score: 0.5042
+ 2025-09-27 15:14:26,084 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 90/100 | Train Loss: 0.0171 | Val rms_score: 0.5080
+ 2025-09-27 15:14:36,646 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 91/100 | Train Loss: 0.0151 | Val rms_score: 0.5042
+ 2025-09-27 15:14:47,684 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 92/100 | Train Loss: 0.0190 | Val rms_score: 0.5120
+ 2025-09-27 15:14:58,461 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 93/100 | Train Loss: 0.0161 | Val rms_score: 0.5071
+ 2025-09-27 15:15:08,917 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 94/100 | Train Loss: 0.0162 | Val rms_score: 0.5059
+ 2025-09-27 15:15:19,136 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 95/100 | Train Loss: 0.0146 | Val rms_score: 0.5026
+ 2025-09-27 15:15:30,824 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 96/100 | Train Loss: 0.0158 | Val rms_score: 0.5068
+ 2025-09-27 15:15:41,443 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 97/100 | Train Loss: 0.0149 | Val rms_score: 0.5068
+ 2025-09-27 15:15:51,628 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 98/100 | Train Loss: 0.0141 | Val rms_score: 0.5031
+ 2025-09-27 15:16:02,337 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 99/100 | Train Loss: 0.0149 | Val rms_score: 0.5070
+ 2025-09-27 15:16:12,654 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 100/100 | Train Loss: 0.0152 | Val rms_score: 0.5059
+ 2025-09-27 15:16:13,395 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Test rms_score: 0.4424
+ 2025-09-27 15:16:13,778 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Starting triplicate run 3 for dataset adme_microsom_stab_r at 2025-09-27_15-16-13
+ 2025-09-27 15:16:22,315 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 1/100 | Train Loss: 0.5417 | Val rms_score: 0.5372
+ 2025-09-27 15:16:22,315 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Global step of best model: 136
+ 2025-09-27 15:16:23,185 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Best model saved at epoch 1 with val rms_score: 0.5372
+ 2025-09-27 15:16:33,590 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 2/100 | Train Loss: 0.4358 | Val rms_score: 0.4863
+ 2025-09-27 15:16:33,767 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Global step of best model: 272
+ 2025-09-27 15:16:34,744 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Best model saved at epoch 2 with val rms_score: 0.4863
+ 2025-09-27 15:16:45,981 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 3/100 | Train Loss: 0.2910 | Val rms_score: 0.4871
+ 2025-09-27 15:16:56,634 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 4/100 | Train Loss: 0.2486 | Val rms_score: 0.5244
+ 2025-09-27 15:17:07,013 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 5/100 | Train Loss: 0.1773 | Val rms_score: 0.5049
+ 2025-09-27 15:17:17,281 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 6/100 | Train Loss: 0.1777 | Val rms_score: 0.5009
+ 2025-09-27 15:17:28,914 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 7/100 | Train Loss: 0.1124 | Val rms_score: 0.5011
+ 2025-09-27 15:17:40,294 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 8/100 | Train Loss: 0.0906 | Val rms_score: 0.5152
+ 2025-09-27 15:17:50,595 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 9/100 | Train Loss: 0.0846 | Val rms_score: 0.5263
+ 2025-09-27 15:18:01,885 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 10/100 | Train Loss: 0.0617 | Val rms_score: 0.5213
+ 2025-09-27 15:18:12,369 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 11/100 | Train Loss: 0.0531 | Val rms_score: 0.5023
+ 2025-09-27 15:18:23,250 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 12/100 | Train Loss: 0.0508 | Val rms_score: 0.5045
236
+ 2025-09-27 15:18:33,858 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 13/100 | Train Loss: 0.0492 | Val rms_score: 0.5048
237
+ 2025-09-27 15:18:44,905 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 14/100 | Train Loss: 0.0361 | Val rms_score: 0.5144
238
+ 2025-09-27 15:18:56,339 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 15/100 | Train Loss: 0.0398 | Val rms_score: 0.5188
239
+ 2025-09-27 15:19:06,747 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 16/100 | Train Loss: 0.0428 | Val rms_score: 0.5137
240
+ 2025-09-27 15:19:17,831 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 17/100 | Train Loss: 0.0400 | Val rms_score: 0.5047
241
+ 2025-09-27 15:19:28,086 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 18/100 | Train Loss: 0.0382 | Val rms_score: 0.5110
242
+ 2025-09-27 15:19:38,364 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 19/100 | Train Loss: 0.0331 | Val rms_score: 0.5112
243
+ 2025-09-27 15:19:49,105 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 20/100 | Train Loss: 0.0301 | Val rms_score: 0.5157
244
+ 2025-09-27 15:19:59,701 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 21/100 | Train Loss: 0.0290 | Val rms_score: 0.5118
245
+ 2025-09-27 15:20:10,741 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 22/100 | Train Loss: 0.0289 | Val rms_score: 0.5106
246
+ 2025-09-27 15:20:21,976 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 23/100 | Train Loss: 0.0289 | Val rms_score: 0.5084
247
+ 2025-09-27 15:20:32,521 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 24/100 | Train Loss: 0.0289 | Val rms_score: 0.5071
248
+ 2025-09-27 15:20:43,212 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 25/100 | Train Loss: 0.0298 | Val rms_score: 0.5048
249
+ 2025-09-27 15:20:53,735 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 26/100 | Train Loss: 0.0280 | Val rms_score: 0.5055
250
+ 2025-09-27 15:21:04,601 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 27/100 | Train Loss: 0.0275 | Val rms_score: 0.5057
251
+ 2025-09-27 15:21:15,893 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 28/100 | Train Loss: 0.0195 | Val rms_score: 0.5079
252
+ 2025-09-27 15:21:25,894 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 29/100 | Train Loss: 0.0273 | Val rms_score: 0.5086
253
+ 2025-09-27 15:21:37,219 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 30/100 | Train Loss: 0.0256 | Val rms_score: 0.5104
254
+ 2025-09-27 15:21:47,423 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 31/100 | Train Loss: 0.0243 | Val rms_score: 0.5068
255
+ 2025-09-27 15:21:58,640 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 32/100 | Train Loss: 0.0227 | Val rms_score: 0.5082
256
+ 2025-09-27 15:22:09,185 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 33/100 | Train Loss: 0.0236 | Val rms_score: 0.5108
257
+ 2025-09-27 15:22:19,125 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 34/100 | Train Loss: 0.0220 | Val rms_score: 0.5078
258
+ 2025-09-27 15:22:29,744 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 35/100 | Train Loss: 0.0225 | Val rms_score: 0.5109
259
+ 2025-09-27 15:22:39,998 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 36/100 | Train Loss: 0.0226 | Val rms_score: 0.5111
260
+ 2025-09-27 15:22:51,870 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 37/100 | Train Loss: 0.0225 | Val rms_score: 0.5081
261
+ 2025-09-27 15:23:02,096 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 38/100 | Train Loss: 0.0206 | Val rms_score: 0.5052
262
+ 2025-09-27 15:23:13,273 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 39/100 | Train Loss: 0.0305 | Val rms_score: 0.5079
263
+ 2025-09-27 15:23:23,628 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 40/100 | Train Loss: 0.0207 | Val rms_score: 0.5121
264
+ 2025-09-27 15:23:34,016 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 41/100 | Train Loss: 0.0215 | Val rms_score: 0.5114
265
+ 2025-09-27 15:23:44,798 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 42/100 | Train Loss: 0.0234 | Val rms_score: 0.5081
266
+ 2025-09-27 15:23:55,474 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 43/100 | Train Loss: 0.0196 | Val rms_score: 0.5094
267
+ 2025-09-27 15:24:05,790 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 44/100 | Train Loss: 0.0201 | Val rms_score: 0.5088
268
+ 2025-09-27 15:24:16,933 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 45/100 | Train Loss: 0.0232 | Val rms_score: 0.5091
269
+ 2025-09-27 15:24:27,339 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 46/100 | Train Loss: 0.0190 | Val rms_score: 0.5111
270
+ 2025-09-27 15:24:38,342 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 47/100 | Train Loss: 0.0209 | Val rms_score: 0.5076
271
+ 2025-09-27 15:24:49,361 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 48/100 | Train Loss: 0.0199 | Val rms_score: 0.5151
272
+ 2025-09-27 15:24:59,730 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 49/100 | Train Loss: 0.0198 | Val rms_score: 0.5072
273
+ 2025-09-27 15:25:10,375 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 50/100 | Train Loss: 0.0190 | Val rms_score: 0.5083
274
+ 2025-09-27 15:25:20,649 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 51/100 | Train Loss: 0.0190 | Val rms_score: 0.5120
275
+ 2025-09-27 15:25:32,231 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 52/100 | Train Loss: 0.0214 | Val rms_score: 0.5133
276
+ 2025-09-27 15:25:42,786 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 53/100 | Train Loss: 0.0197 | Val rms_score: 0.5076
277
+ 2025-09-27 15:25:53,794 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 54/100 | Train Loss: 0.0185 | Val rms_score: 0.5066
278
+ 2025-09-27 15:26:04,313 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 55/100 | Train Loss: 0.0196 | Val rms_score: 0.5075
279
+ 2025-09-27 15:26:14,717 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 56/100 | Train Loss: 0.0189 | Val rms_score: 0.5041
280
+ 2025-09-27 15:26:25,423 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 57/100 | Train Loss: 0.0161 | Val rms_score: 0.5076
281
+ 2025-09-27 15:26:36,429 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 58/100 | Train Loss: 0.0179 | Val rms_score: 0.5092
282
+ 2025-09-27 15:26:47,595 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 59/100 | Train Loss: 0.0195 | Val rms_score: 0.5040
283
+ 2025-09-27 15:26:57,977 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 60/100 | Train Loss: 0.0181 | Val rms_score: 0.5030
284
+ 2025-09-27 15:27:08,824 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 61/100 | Train Loss: 0.0178 | Val rms_score: 0.5039
285
+ 2025-09-27 15:27:19,720 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 62/100 | Train Loss: 0.0194 | Val rms_score: 0.5047
286
+ 2025-09-27 15:27:29,955 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 63/100 | Train Loss: 0.0178 | Val rms_score: 0.5061
287
+ 2025-09-27 15:27:40,526 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 64/100 | Train Loss: 0.0167 | Val rms_score: 0.4983
288
+ 2025-09-27 15:27:51,091 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 65/100 | Train Loss: 0.0187 | Val rms_score: 0.5034
289
+ 2025-09-27 15:28:01,546 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 66/100 | Train Loss: 0.0182 | Val rms_score: 0.5045
290
+ 2025-09-27 15:28:13,444 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 67/100 | Train Loss: 0.0149 | Val rms_score: 0.5045
291
+ 2025-09-27 15:28:23,924 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 68/100 | Train Loss: 0.0169 | Val rms_score: 0.5022
292
+ 2025-09-27 15:28:35,822 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 69/100 | Train Loss: 0.0176 | Val rms_score: 0.5046
293
+ 2025-09-27 15:28:46,034 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 70/100 | Train Loss: 0.0181 | Val rms_score: 0.5051
294
+ 2025-09-27 15:28:56,425 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 71/100 | Train Loss: 0.0184 | Val rms_score: 0.5062
295
+ 2025-09-27 15:29:07,577 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 72/100 | Train Loss: 0.0177 | Val rms_score: 0.5069
296
+ 2025-09-27 15:29:18,497 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 73/100 | Train Loss: 0.0148 | Val rms_score: 0.5053
297
+ 2025-09-27 15:29:29,363 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 74/100 | Train Loss: 0.0170 | Val rms_score: 0.5065
298
+ 2025-09-27 15:29:39,560 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 75/100 | Train Loss: 0.0173 | Val rms_score: 0.5053
299
+ 2025-09-27 15:29:49,646 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 76/100 | Train Loss: 0.0169 | Val rms_score: 0.5053
300
+ 2025-09-27 15:30:00,346 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 77/100 | Train Loss: 0.0174 | Val rms_score: 0.5041
301
+ 2025-09-27 15:30:10,278 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 78/100 | Train Loss: 0.0168 | Val rms_score: 0.5004
302
+ 2025-09-27 15:30:20,175 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 79/100 | Train Loss: 0.0171 | Val rms_score: 0.5105
303
+ 2025-09-27 15:30:31,123 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 80/100 | Train Loss: 0.0169 | Val rms_score: 0.5005
304
+ 2025-09-27 15:30:42,308 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 81/100 | Train Loss: 0.0167 | Val rms_score: 0.5028
305
+ 2025-09-27 15:30:52,595 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 82/100 | Train Loss: 0.0163 | Val rms_score: 0.5006
306
+ 2025-09-27 15:31:03,142 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 83/100 | Train Loss: 0.0153 | Val rms_score: 0.5044
307
+ 2025-09-27 15:31:13,821 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 84/100 | Train Loss: 0.0171 | Val rms_score: 0.5034
308
+ 2025-09-27 15:31:23,768 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 85/100 | Train Loss: 0.0174 | Val rms_score: 0.5101
309
+ 2025-09-27 15:31:33,890 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 86/100 | Train Loss: 0.0160 | Val rms_score: 0.5047
310
+ 2025-09-27 15:31:44,409 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 87/100 | Train Loss: 0.0143 | Val rms_score: 0.5016
311
+ 2025-09-27 15:31:55,247 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 88/100 | Train Loss: 0.0170 | Val rms_score: 0.5042
312
+ 2025-09-27 15:32:06,354 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 89/100 | Train Loss: 0.0099 | Val rms_score: 0.5101
313
+ 2025-09-27 15:32:17,833 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 90/100 | Train Loss: 0.0157 | Val rms_score: 0.4988
314
+ 2025-09-27 15:32:28,928 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 91/100 | Train Loss: 0.0158 | Val rms_score: 0.5067
315
+ 2025-09-27 15:32:41,856 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 92/100 | Train Loss: 0.0155 | Val rms_score: 0.5048
316
+ 2025-09-27 15:32:51,935 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 93/100 | Train Loss: 0.0156 | Val rms_score: 0.5035
317
+ 2025-09-27 15:33:02,094 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 94/100 | Train Loss: 0.0168 | Val rms_score: 0.5021
318
+ 2025-09-27 15:33:12,728 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 95/100 | Train Loss: 0.0176 | Val rms_score: 0.5048
319
+ 2025-09-27 15:33:23,531 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 96/100 | Train Loss: 0.0152 | Val rms_score: 0.5050
320
+ 2025-09-27 15:33:33,897 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 97/100 | Train Loss: 0.0166 | Val rms_score: 0.5043
321
+ 2025-09-27 15:33:43,991 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 98/100 | Train Loss: 0.0159 | Val rms_score: 0.5032
322
+ 2025-09-27 15:33:54,719 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 99/100 | Train Loss: 0.0150 | Val rms_score: 0.5039
323
+ 2025-09-27 15:34:04,638 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Epoch 100/100 | Train Loss: 0.0145 | Val rms_score: 0.5034
324
+ 2025-09-27 15:34:05,595 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Test rms_score: 0.4340
325
+ 2025-09-27 15:34:06,032 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size16 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4403, Std Dev: 0.0046
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_permeability_epochs100_batch_size8_20250927_085030.log ADDED
@@ -0,0 +1,379 @@
1
+ 2025-09-27 08:50:30,615 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Running benchmark for dataset: adme_permeability
2
+ 2025-09-27 08:50:30,615 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - dataset: adme_permeability, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
3
+ 2025-09-27 08:50:30,620 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Starting triplicate run 1 for dataset adme_permeability at 2025-09-27_08-50-30
4
+ 2025-09-27 08:50:45,644 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 1/100 | Train Loss: 0.3865 | Val rms_score: 0.4062
5
+ 2025-09-27 08:50:45,644 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 265
6
+ 2025-09-27 08:50:46,454 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 1 with val rms_score: 0.4062
7
+ 2025-09-27 08:51:05,069 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 2/100 | Train Loss: 0.3042 | Val rms_score: 0.4032
8
+ 2025-09-27 08:51:05,226 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 530
9
+ 2025-09-27 08:51:05,831 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 2 with val rms_score: 0.4032
10
+ 2025-09-27 08:51:26,629 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 3/100 | Train Loss: 0.2158 | Val rms_score: 0.3799
11
+ 2025-09-27 08:51:26,825 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 795
12
+ 2025-09-27 08:51:27,418 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 3 with val rms_score: 0.3799
13
+ 2025-09-27 08:51:50,549 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 4/100 | Train Loss: 0.1656 | Val rms_score: 0.3702
14
+ 2025-09-27 08:51:50,741 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 1060
15
+ 2025-09-27 08:51:51,350 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 4 with val rms_score: 0.3702
16
+ 2025-09-27 08:52:10,410 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 5/100 | Train Loss: 0.0994 | Val rms_score: 0.3739
17
+ 2025-09-27 08:52:30,059 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 6/100 | Train Loss: 0.0854 | Val rms_score: 0.3672
18
+ 2025-09-27 08:52:30,555 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 1590
19
+ 2025-09-27 08:52:31,146 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 6 with val rms_score: 0.3672
20
+ 2025-09-27 08:52:51,895 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 7/100 | Train Loss: 0.0818 | Val rms_score: 0.3973
21
+ 2025-09-27 08:53:16,234 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 8/100 | Train Loss: 0.0668 | Val rms_score: 0.3693
22
+ 2025-09-27 08:53:36,481 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 9/100 | Train Loss: 0.0544 | Val rms_score: 0.3627
23
+ 2025-09-27 08:53:36,642 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 2385
24
+ 2025-09-27 08:53:37,235 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 9 with val rms_score: 0.3627
25
+ 2025-09-27 08:53:57,757 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 10/100 | Train Loss: 0.0491 | Val rms_score: 0.3608
26
+ 2025-09-27 08:53:57,972 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 2650
27
+ 2025-09-27 08:53:58,620 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 10 with val rms_score: 0.3608
28
+ 2025-09-27 08:54:18,901 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 11/100 | Train Loss: 0.0643 | Val rms_score: 0.3729
29
+ 2025-09-27 08:54:40,599 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 12/100 | Train Loss: 0.0422 | Val rms_score: 0.3640
30
+ 2025-09-27 08:55:02,910 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 13/100 | Train Loss: 0.0418 | Val rms_score: 0.3568
31
+ 2025-09-27 08:55:03,071 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 3445
32
+ 2025-09-27 08:55:03,661 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 13 with val rms_score: 0.3568
33
+ 2025-09-27 08:55:24,161 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 14/100 | Train Loss: 0.0492 | Val rms_score: 0.3669
34
+ 2025-09-27 08:55:43,936 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 15/100 | Train Loss: 0.0429 | Val rms_score: 0.3651
35
+ 2025-09-27 08:56:04,878 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 16/100 | Train Loss: 0.0396 | Val rms_score: 0.3598
36
+ 2025-09-27 08:56:28,021 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 17/100 | Train Loss: 0.0297 | Val rms_score: 0.3732
37
+ 2025-09-27 08:56:48,480 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 18/100 | Train Loss: 0.0328 | Val rms_score: 0.3612
38
+ 2025-09-27 08:57:09,255 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 19/100 | Train Loss: 0.0301 | Val rms_score: 0.3621
39
+ 2025-09-27 08:57:30,433 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 20/100 | Train Loss: 0.0342 | Val rms_score: 0.3626
40
+ 2025-09-27 08:57:50,622 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 21/100 | Train Loss: 0.0365 | Val rms_score: 0.3676
41
+ 2025-09-27 08:58:14,284 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 22/100 | Train Loss: 0.0357 | Val rms_score: 0.3637
42
+ 2025-09-27 08:58:36,974 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 23/100 | Train Loss: 0.0293 | Val rms_score: 0.3658
43
+ 2025-09-27 08:58:56,668 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 24/100 | Train Loss: 0.0339 | Val rms_score: 0.3610
44
+ 2025-09-27 08:59:17,179 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 25/100 | Train Loss: 0.0238 | Val rms_score: 0.3602
45
+ 2025-09-27 08:59:38,471 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 26/100 | Train Loss: 0.0311 | Val rms_score: 0.3616
46
+ 2025-09-27 09:00:03,955 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 27/100 | Train Loss: 0.0276 | Val rms_score: 0.3630
47
+ 2025-09-27 09:00:24,273 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 28/100 | Train Loss: 0.0361 | Val rms_score: 0.3683
48
+ 2025-09-27 09:00:45,442 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 29/100 | Train Loss: 0.0274 | Val rms_score: 0.3646
49
+ 2025-09-27 09:01:05,576 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 30/100 | Train Loss: 0.0278 | Val rms_score: 0.3649
50
+ 2025-09-27 09:01:28,749 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 31/100 | Train Loss: 0.0227 | Val rms_score: 0.3636
51
+ 2025-09-27 09:01:53,349 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 32/100 | Train Loss: 0.0245 | Val rms_score: 0.3659
52
+ 2025-09-27 09:02:13,070 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 33/100 | Train Loss: 0.0255 | Val rms_score: 0.3570
53
+ 2025-09-27 09:02:34,305 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 34/100 | Train Loss: 0.0254 | Val rms_score: 0.3657
54
+ 2025-09-27 09:02:54,825 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 35/100 | Train Loss: 0.0264 | Val rms_score: 0.3665
55
+ 2025-09-27 09:03:17,705 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 36/100 | Train Loss: 0.0217 | Val rms_score: 0.3638
56
+ 2025-09-27 09:03:39,088 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 37/100 | Train Loss: 0.0410 | Val rms_score: 0.3618
57
+ 2025-09-27 09:04:00,677 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 38/100 | Train Loss: 0.0257 | Val rms_score: 0.3624
58
+ 2025-09-27 09:04:20,237 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 39/100 | Train Loss: 0.0265 | Val rms_score: 0.3651
59
+ 2025-09-27 09:04:40,417 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 40/100 | Train Loss: 0.0245 | Val rms_score: 0.3592
60
+ 2025-09-27 09:05:03,368 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 41/100 | Train Loss: 0.0243 | Val rms_score: 0.3611
61
+ 2025-09-27 09:05:25,373 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 42/100 | Train Loss: 0.0211 | Val rms_score: 0.3625
62
+ 2025-09-27 09:05:45,697 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 43/100 | Train Loss: 0.0222 | Val rms_score: 0.3626
63
+ 2025-09-27 09:06:06,164 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 44/100 | Train Loss: 0.0258 | Val rms_score: 0.3644
64
+ 2025-09-27 09:06:27,318 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 45/100 | Train Loss: 0.0233 | Val rms_score: 0.3642
65
+ 2025-09-27 09:06:51,294 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 46/100 | Train Loss: 0.0224 | Val rms_score: 0.3626
66
+ 2025-09-27 09:07:11,774 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 47/100 | Train Loss: 0.0240 | Val rms_score: 0.3641
67
+ 2025-09-27 09:07:34,444 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 48/100 | Train Loss: 0.0231 | Val rms_score: 0.3649
68
+ 2025-09-27 09:07:56,576 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 49/100 | Train Loss: 0.0231 | Val rms_score: 0.3635
69
+ 2025-09-27 09:08:20,089 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 50/100 | Train Loss: 0.0220 | Val rms_score: 0.3619
70
+ 2025-09-27 09:08:42,739 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 51/100 | Train Loss: 0.0203 | Val rms_score: 0.3617
71
+ 2025-09-27 09:09:07,938 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 52/100 | Train Loss: 0.0246 | Val rms_score: 0.3639
72
+ 2025-09-27 09:09:31,247 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 53/100 | Train Loss: 0.0238 | Val rms_score: 0.3656
73
+ 2025-09-27 09:09:53,355 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 54/100 | Train Loss: 0.0193 | Val rms_score: 0.3645
74
+ 2025-09-27 09:10:15,378 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 55/100 | Train Loss: 0.0230 | Val rms_score: 0.3625
75
+ 2025-09-27 09:10:37,227 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 56/100 | Train Loss: 0.0200 | Val rms_score: 0.3626
76
+ 2025-09-27 09:11:00,362 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 57/100 | Train Loss: 0.0240 | Val rms_score: 0.3613
77
+ 2025-09-27 09:11:24,269 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 58/100 | Train Loss: 0.0196 | Val rms_score: 0.3616
78
+ 2025-09-27 09:11:47,488 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 59/100 | Train Loss: 0.0212 | Val rms_score: 0.3610
79
+ 2025-09-27 09:12:10,163 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 60/100 | Train Loss: 0.0209 | Val rms_score: 0.3620
80
+ 2025-09-27 09:12:33,999 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 61/100 | Train Loss: 0.0206 | Val rms_score: 0.3606
81
+ 2025-09-27 09:12:57,176 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 62/100 | Train Loss: 0.0180 | Val rms_score: 0.3590
82
+ 2025-09-27 09:13:18,878 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 63/100 | Train Loss: 0.0212 | Val rms_score: 0.3613
83
+ 2025-09-27 09:13:43,500 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 64/100 | Train Loss: 0.0215 | Val rms_score: 0.3648
84
+ 2025-09-27 09:14:06,089 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 65/100 | Train Loss: 0.0225 | Val rms_score: 0.3601
85
+ 2025-09-27 09:14:28,426 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 66/100 | Train Loss: 0.0201 | Val rms_score: 0.3619
86
+ 2025-09-27 09:14:50,808 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 67/100 | Train Loss: 0.0190 | Val rms_score: 0.3610
87
+ 2025-09-27 09:15:14,321 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 68/100 | Train Loss: 0.0180 | Val rms_score: 0.3601
88
+ 2025-09-27 09:15:35,949 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 69/100 | Train Loss: 0.0203 | Val rms_score: 0.3598
89
+ 2025-09-27 09:16:00,295 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 70/100 | Train Loss: 0.0170 | Val rms_score: 0.3636
90
+ 2025-09-27 09:16:22,050 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 71/100 | Train Loss: 0.0201 | Val rms_score: 0.3611
91
+ 2025-09-27 09:16:46,575 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 72/100 | Train Loss: 0.0200 | Val rms_score: 0.3609
92
+ 2025-09-27 09:17:08,126 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 73/100 | Train Loss: 0.0174 | Val rms_score: 0.3600
93
+ 2025-09-27 09:17:29,298 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 74/100 | Train Loss: 0.0152 | Val rms_score: 0.3602
94
+ 2025-09-27 09:17:50,620 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 75/100 | Train Loss: 0.0204 | Val rms_score: 0.3627
95
+ 2025-09-27 09:18:15,861 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 76/100 | Train Loss: 0.0195 | Val rms_score: 0.3629
96
+ 2025-09-27 09:18:38,003 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 77/100 | Train Loss: 0.0244 | Val rms_score: 0.3609
97
+ 2025-09-27 09:18:59,635 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 78/100 | Train Loss: 0.0193 | Val rms_score: 0.3595
98
+ 2025-09-27 09:19:22,391 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 79/100 | Train Loss: 0.0192 | Val rms_score: 0.3621
99
+ 2025-09-27 09:19:47,267 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 80/100 | Train Loss: 0.0182 | Val rms_score: 0.3594
100
+ 2025-09-27 09:20:09,155 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 81/100 | Train Loss: 0.0179 | Val rms_score: 0.3569
101
+ 2025-09-27 09:20:33,424 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 82/100 | Train Loss: 0.0150 | Val rms_score: 0.3575
102
+ 2025-09-27 09:20:55,224 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 83/100 | Train Loss: 0.0192 | Val rms_score: 0.3608
103
+ 2025-09-27 09:21:18,144 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 84/100 | Train Loss: 0.0173 | Val rms_score: 0.3591
104
+ 2025-09-27 09:21:39,995 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 85/100 | Train Loss: 0.0174 | Val rms_score: 0.3618
105
+ 2025-09-27 09:22:04,959 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 86/100 | Train Loss: 0.0164 | Val rms_score: 0.3602
106
+ 2025-09-27 09:22:48,847 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 87/100 | Train Loss: 0.0180 | Val rms_score: 0.3597
107
+ 2025-09-27 09:23:24,843 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 88/100 | Train Loss: 0.0159 | Val rms_score: 0.3625
108
+ 2025-09-27 09:23:50,818 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 89/100 | Train Loss: 0.0176 | Val rms_score: 0.3592
109
+ 2025-09-27 09:24:11,799 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 90/100 | Train Loss: 0.0193 | Val rms_score: 0.3596
+ 2025-09-27 09:24:36,841 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 91/100 | Train Loss: 0.0250 | Val rms_score: 0.3610
+ 2025-09-27 09:25:00,368 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 92/100 | Train Loss: 0.0175 | Val rms_score: 0.3618
+ 2025-09-27 09:25:36,707 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 93/100 | Train Loss: 0.0181 | Val rms_score: 0.3587
+ 2025-09-27 09:26:16,893 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 94/100 | Train Loss: 0.0203 | Val rms_score: 0.3605
+ 2025-09-27 09:26:54,151 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 95/100 | Train Loss: 0.0160 | Val rms_score: 0.3592
+ 2025-09-27 09:27:17,610 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 96/100 | Train Loss: 0.0156 | Val rms_score: 0.3619
+ 2025-09-27 09:27:40,071 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 97/100 | Train Loss: 0.0195 | Val rms_score: 0.3611
+ 2025-09-27 09:28:00,750 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 98/100 | Train Loss: 0.0191 | Val rms_score: 0.3569
+ 2025-09-27 09:28:24,277 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 99/100 | Train Loss: 0.0177 | Val rms_score: 0.3594
+ 2025-09-27 09:28:46,543 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 100/100 | Train Loss: 0.0163 | Val rms_score: 0.3606
+ 2025-09-27 09:28:49,603 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Test rms_score: 0.5063
+ 2025-09-27 09:28:50,058 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Starting triplicate run 2 for dataset adme_permeability at 2025-09-27_09-28-50
+ 2025-09-27 09:29:12,616 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 1/100 | Train Loss: 0.4462 | Val rms_score: 0.4704
+ 2025-09-27 09:29:12,616 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 265
+ 2025-09-27 09:29:10,536 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 1 with val rms_score: 0.4704
+ 2025-09-27 09:29:35,167 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 2/100 | Train Loss: 0.3208 | Val rms_score: 0.4036
+ 2025-09-27 09:29:35,359 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 530
+ 2025-09-27 09:29:36,029 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 2 with val rms_score: 0.4036
+ 2025-09-27 09:29:57,352 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 3/100 | Train Loss: 0.2487 | Val rms_score: 0.4066
+ 2025-09-27 09:30:20,263 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 4/100 | Train Loss: 0.1625 | Val rms_score: 0.3776
+ 2025-09-27 09:30:20,428 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 1060
+ 2025-09-27 09:30:21,053 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 4 with val rms_score: 0.3776
+ 2025-09-27 09:30:42,863 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 5/100 | Train Loss: 0.1306 | Val rms_score: 0.3659
+ 2025-09-27 09:30:43,065 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 1325
+ 2025-09-27 09:30:44,012 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 5 with val rms_score: 0.3659
+ 2025-09-27 09:31:05,449 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 6/100 | Train Loss: 0.1021 | Val rms_score: 0.3612
+ 2025-09-27 09:31:06,132 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 1590
+ 2025-09-27 09:31:06,787 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 6 with val rms_score: 0.3612
+ 2025-09-27 09:31:28,820 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 7/100 | Train Loss: 0.0744 | Val rms_score: 0.3751
+ 2025-09-27 09:31:54,526 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 8/100 | Train Loss: 0.0660 | Val rms_score: 0.3690
+ 2025-09-27 09:32:16,421 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 9/100 | Train Loss: 0.0588 | Val rms_score: 0.3653
+ 2025-09-27 09:32:38,132 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 10/100 | Train Loss: 0.0688 | Val rms_score: 0.3833
+ 2025-09-27 09:32:59,926 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 11/100 | Train Loss: 0.0367 | Val rms_score: 0.3627
+ 2025-09-27 09:33:22,374 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 12/100 | Train Loss: 0.0473 | Val rms_score: 0.3607
+ 2025-09-27 09:33:22,545 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 3180
+ 2025-09-27 09:33:23,188 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 12 with val rms_score: 0.3607
+ 2025-09-27 09:33:45,286 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 13/100 | Train Loss: 0.0436 | Val rms_score: 0.3755
+ 2025-09-27 09:34:09,729 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 14/100 | Train Loss: 0.0303 | Val rms_score: 0.3571
+ 2025-09-27 09:34:09,906 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 3710
+ 2025-09-27 09:34:10,599 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 14 with val rms_score: 0.3571
+ 2025-09-27 09:34:32,240 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 15/100 | Train Loss: 0.0360 | Val rms_score: 0.3677
+ 2025-09-27 09:34:54,990 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 16/100 | Train Loss: 0.0336 | Val rms_score: 0.3596
+ 2025-09-27 09:35:31,185 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 17/100 | Train Loss: 0.0287 | Val rms_score: 0.3564
+ 2025-09-27 09:35:31,485 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 4505
+ 2025-09-27 09:35:32,593 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 17 with val rms_score: 0.3564
+ 2025-09-27 09:36:06,748 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 18/100 | Train Loss: 0.0350 | Val rms_score: 0.3619
+ 2025-09-27 09:36:44,261 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 19/100 | Train Loss: 0.0301 | Val rms_score: 0.3563
+ 2025-09-27 09:36:44,523 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 5035
+ 2025-09-27 09:36:45,544 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 19 with val rms_score: 0.3563
+ 2025-09-27 09:37:17,264 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 20/100 | Train Loss: 0.0342 | Val rms_score: 0.3557
+ 2025-09-27 09:37:17,523 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 5300
+ 2025-09-27 09:37:18,446 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 20 with val rms_score: 0.3557
+ 2025-09-27 09:37:50,006 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 21/100 | Train Loss: 0.0349 | Val rms_score: 0.3566
+ 2025-09-27 09:38:21,147 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 22/100 | Train Loss: 0.0270 | Val rms_score: 0.3581
+ 2025-09-27 09:38:59,606 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 23/100 | Train Loss: 0.0299 | Val rms_score: 0.3549
+ 2025-09-27 09:38:59,888 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 6095
+ 2025-09-27 09:39:00,954 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 23 with val rms_score: 0.3549
+ 2025-09-27 09:39:38,634 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 24/100 | Train Loss: 0.0302 | Val rms_score: 0.3533
+ 2025-09-27 09:39:38,907 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 6360
+ 2025-09-27 09:39:39,987 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 24 with val rms_score: 0.3533
+ 2025-09-27 09:40:14,635 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 25/100 | Train Loss: 0.0322 | Val rms_score: 0.3574
+ 2025-09-27 09:40:51,157 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 26/100 | Train Loss: 0.0293 | Val rms_score: 0.3547
+ 2025-09-27 09:41:26,884 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 27/100 | Train Loss: 0.0249 | Val rms_score: 0.3579
+ 2025-09-27 09:42:05,407 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 28/100 | Train Loss: 0.0297 | Val rms_score: 0.3573
+ 2025-09-27 09:42:42,473 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 29/100 | Train Loss: 0.0254 | Val rms_score: 0.3625
+ 2025-09-27 09:43:18,183 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 30/100 | Train Loss: 0.0241 | Val rms_score: 0.3527
+ 2025-09-27 09:43:15,773 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 7950
+ 2025-09-27 09:43:16,862 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 30 with val rms_score: 0.3527
+ 2025-09-27 09:43:53,770 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 31/100 | Train Loss: 0.0221 | Val rms_score: 0.3584
+ 2025-09-27 09:44:28,604 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 32/100 | Train Loss: 0.0243 | Val rms_score: 0.3622
+ 2025-09-27 09:45:00,168 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 33/100 | Train Loss: 0.0271 | Val rms_score: 0.3602
+ 2025-09-27 09:45:26,960 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 34/100 | Train Loss: 0.0241 | Val rms_score: 0.3579
+ 2025-09-27 09:45:51,158 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 35/100 | Train Loss: 0.0242 | Val rms_score: 0.3591
+ 2025-09-27 09:46:13,272 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 36/100 | Train Loss: 0.0237 | Val rms_score: 0.3578
+ 2025-09-27 09:46:34,453 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 37/100 | Train Loss: 0.0262 | Val rms_score: 0.3603
+ 2025-09-27 09:46:55,919 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 38/100 | Train Loss: 0.0230 | Val rms_score: 0.3610
+ 2025-09-27 09:47:18,340 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 39/100 | Train Loss: 0.0206 | Val rms_score: 0.3583
+ 2025-09-27 09:47:38,853 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 40/100 | Train Loss: 0.0234 | Val rms_score: 0.3527
+ 2025-09-27 09:47:39,020 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 10600
+ 2025-09-27 09:47:40,416 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 40 with val rms_score: 0.3527
+ 2025-09-27 09:48:00,961 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 41/100 | Train Loss: 0.0208 | Val rms_score: 0.3562
+ 2025-09-27 09:48:23,118 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 42/100 | Train Loss: 0.0263 | Val rms_score: 0.3526
+ 2025-09-27 09:48:23,283 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 11130
+ 2025-09-27 09:48:23,936 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 42 with val rms_score: 0.3526
+ 2025-09-27 09:48:44,115 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 43/100 | Train Loss: 0.0214 | Val rms_score: 0.3545
+ 2025-09-27 09:49:07,633 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 44/100 | Train Loss: 0.0191 | Val rms_score: 0.3573
+ 2025-09-27 09:49:28,297 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 45/100 | Train Loss: 0.0217 | Val rms_score: 0.3552
+ 2025-09-27 09:49:49,484 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 46/100 | Train Loss: 0.0224 | Val rms_score: 0.3582
+ 2025-09-27 09:50:11,181 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 47/100 | Train Loss: 0.0200 | Val rms_score: 0.3575
+ 2025-09-27 09:50:32,351 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 48/100 | Train Loss: 0.0179 | Val rms_score: 0.3587
+ 2025-09-27 09:50:54,963 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 49/100 | Train Loss: 0.0241 | Val rms_score: 0.3603
+ 2025-09-27 09:51:15,965 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 50/100 | Train Loss: 0.0214 | Val rms_score: 0.3562
+ 2025-09-27 09:51:36,344 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 51/100 | Train Loss: 0.0195 | Val rms_score: 0.3623
+ 2025-09-27 09:51:58,869 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 52/100 | Train Loss: 0.0224 | Val rms_score: 0.3574
+ 2025-09-27 09:52:21,453 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 53/100 | Train Loss: 0.0190 | Val rms_score: 0.3594
+ 2025-09-27 09:52:45,102 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 54/100 | Train Loss: 0.0179 | Val rms_score: 0.3579
+ 2025-09-27 09:53:05,913 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 55/100 | Train Loss: 0.0208 | Val rms_score: 0.3583
+ 2025-09-27 09:53:25,621 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 56/100 | Train Loss: 0.0195 | Val rms_score: 0.3606
+ 2025-09-27 09:53:49,652 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 57/100 | Train Loss: 0.0133 | Val rms_score: 0.3598
+ 2025-09-27 09:54:10,777 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 58/100 | Train Loss: 0.0189 | Val rms_score: 0.3606
+ 2025-09-27 09:54:35,290 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 59/100 | Train Loss: 0.0186 | Val rms_score: 0.3600
+ 2025-09-27 09:54:57,375 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 60/100 | Train Loss: 0.0202 | Val rms_score: 0.3606
+ 2025-09-27 09:55:20,282 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 61/100 | Train Loss: 0.0189 | Val rms_score: 0.3562
+ 2025-09-27 09:55:41,422 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 62/100 | Train Loss: 0.0177 | Val rms_score: 0.3552
+ 2025-09-27 09:56:01,628 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 63/100 | Train Loss: 0.0195 | Val rms_score: 0.3527
+ 2025-09-27 09:56:25,903 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 64/100 | Train Loss: 0.0187 | Val rms_score: 0.3572
+ 2025-09-27 09:56:47,176 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 65/100 | Train Loss: 0.0200 | Val rms_score: 0.3544
+ 2025-09-27 09:57:07,663 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 66/100 | Train Loss: 0.0201 | Val rms_score: 0.3584
+ 2025-09-27 09:57:29,137 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 67/100 | Train Loss: 0.0155 | Val rms_score: 0.3550
+ 2025-09-27 09:57:50,630 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 68/100 | Train Loss: 0.0151 | Val rms_score: 0.3557
+ 2025-09-27 09:58:14,563 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 69/100 | Train Loss: 0.0184 | Val rms_score: 0.3538
+ 2025-09-27 09:58:34,601 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 70/100 | Train Loss: 0.0219 | Val rms_score: 0.3579
+ 2025-09-27 09:58:54,519 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 71/100 | Train Loss: 0.0165 | Val rms_score: 0.3587
+ 2025-09-27 09:59:17,551 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 72/100 | Train Loss: 0.0185 | Val rms_score: 0.3574
+ 2025-09-27 09:59:40,032 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 73/100 | Train Loss: 0.0180 | Val rms_score: 0.3570
+ 2025-09-27 10:00:00,898 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 74/100 | Train Loss: 0.0236 | Val rms_score: 0.3535
+ 2025-09-27 10:00:21,180 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 75/100 | Train Loss: 0.0173 | Val rms_score: 0.3537
+ 2025-09-27 10:00:42,786 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 76/100 | Train Loss: 0.0169 | Val rms_score: 0.3546
+ 2025-09-27 10:01:03,232 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 77/100 | Train Loss: 0.0105 | Val rms_score: 0.3560
+ 2025-09-27 10:01:24,849 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 78/100 | Train Loss: 0.0180 | Val rms_score: 0.3557
+ 2025-09-27 10:01:44,763 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 79/100 | Train Loss: 0.0169 | Val rms_score: 0.3554
+ 2025-09-27 10:02:05,509 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 80/100 | Train Loss: 0.0191 | Val rms_score: 0.3566
+ 2025-09-27 10:02:26,014 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 81/100 | Train Loss: 0.0180 | Val rms_score: 0.3553
+ 2025-09-27 10:02:49,610 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 82/100 | Train Loss: 0.0217 | Val rms_score: 0.3557
+ 2025-09-27 10:03:10,417 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 83/100 | Train Loss: 0.0172 | Val rms_score: 0.3546
+ 2025-09-27 10:03:32,578 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 84/100 | Train Loss: 0.0181 | Val rms_score: 0.3555
+ 2025-09-27 10:03:52,627 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 85/100 | Train Loss: 0.0179 | Val rms_score: 0.3561
+ 2025-09-27 10:04:13,116 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 86/100 | Train Loss: 0.0177 | Val rms_score: 0.3586
+ 2025-09-27 10:04:36,951 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 87/100 | Train Loss: 0.0170 | Val rms_score: 0.3572
+ 2025-09-27 10:04:56,141 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 88/100 | Train Loss: 0.0136 | Val rms_score: 0.3585
+ 2025-09-27 10:05:15,555 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 89/100 | Train Loss: 0.0193 | Val rms_score: 0.3568
+ 2025-09-27 10:05:34,857 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 90/100 | Train Loss: 0.0158 | Val rms_score: 0.3577
+ 2025-09-27 10:05:57,971 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 91/100 | Train Loss: 0.0202 | Val rms_score: 0.3576
+ 2025-09-27 10:06:17,945 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 92/100 | Train Loss: 0.0155 | Val rms_score: 0.3567
+ 2025-09-27 10:06:36,636 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 93/100 | Train Loss: 0.0176 | Val rms_score: 0.3567
+ 2025-09-27 10:06:57,069 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 94/100 | Train Loss: 0.0140 | Val rms_score: 0.3593
+ 2025-09-27 10:07:21,782 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 95/100 | Train Loss: 0.0165 | Val rms_score: 0.3577
+ 2025-09-27 10:07:42,267 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 96/100 | Train Loss: 0.0184 | Val rms_score: 0.3566
+ 2025-09-27 10:08:03,459 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 97/100 | Train Loss: 0.0127 | Val rms_score: 0.3552
+ 2025-09-27 10:08:23,769 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 98/100 | Train Loss: 0.0158 | Val rms_score: 0.3559
+ 2025-09-27 10:08:45,424 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 99/100 | Train Loss: 0.0165 | Val rms_score: 0.3570
+ 2025-09-27 10:09:07,814 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 100/100 | Train Loss: 0.0173 | Val rms_score: 0.3543
+ 2025-09-27 10:09:09,138 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Test rms_score: 0.4991
+ 2025-09-27 10:09:09,657 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Starting triplicate run 3 for dataset adme_permeability at 2025-09-27_10-09-09
+ 2025-09-27 10:09:26,733 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 1/100 | Train Loss: 0.4192 | Val rms_score: 0.4647
+ 2025-09-27 10:09:26,733 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 265
+ 2025-09-27 10:09:27,803 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 1 with val rms_score: 0.4647
+ 2025-09-27 10:09:48,853 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 2/100 | Train Loss: 0.3375 | Val rms_score: 0.3654
+ 2025-09-27 10:09:49,051 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 530
+ 2025-09-27 10:09:49,679 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 2 with val rms_score: 0.3654
+ 2025-09-27 10:10:10,189 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 3/100 | Train Loss: 0.2395 | Val rms_score: 0.3664
+ 2025-09-27 10:10:34,085 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 4/100 | Train Loss: 0.1667 | Val rms_score: 0.3806
+ 2025-09-27 10:10:55,303 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 5/100 | Train Loss: 0.1181 | Val rms_score: 0.3626
+ 2025-09-27 10:10:55,488 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 1325
+ 2025-09-27 10:10:56,178 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 5 with val rms_score: 0.3626
+ 2025-09-27 10:11:17,261 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 6/100 | Train Loss: 0.1007 | Val rms_score: 0.3670
+ 2025-09-27 10:11:36,836 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 7/100 | Train Loss: 0.0761 | Val rms_score: 0.3572
+ 2025-09-27 10:11:36,998 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 1855
+ 2025-09-27 10:11:37,649 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 7 with val rms_score: 0.3572
+ 2025-09-27 10:11:58,506 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 8/100 | Train Loss: 0.0781 | Val rms_score: 0.3690
+ 2025-09-27 10:12:20,864 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 9/100 | Train Loss: 0.0574 | Val rms_score: 0.3616
+ 2025-09-27 10:12:41,656 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 10/100 | Train Loss: 0.0522 | Val rms_score: 0.3625
+ 2025-09-27 10:13:02,843 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 11/100 | Train Loss: 0.0833 | Val rms_score: 0.3565
+ 2025-09-27 10:13:03,551 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 2915
+ 2025-09-27 10:13:04,263 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 11 with val rms_score: 0.3565
+ 2025-09-27 10:13:25,920 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 12/100 | Train Loss: 0.0467 | Val rms_score: 0.3528
+ 2025-09-27 10:13:26,138 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 3180
+ 2025-09-27 10:13:26,785 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 12 with val rms_score: 0.3528
+ 2025-09-27 10:13:46,782 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 13/100 | Train Loss: 0.0415 | Val rms_score: 0.3648
+ 2025-09-27 10:14:09,732 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 14/100 | Train Loss: 0.0539 | Val rms_score: 0.3625
+ 2025-09-27 10:14:28,800 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 15/100 | Train Loss: 0.0383 | Val rms_score: 0.3629
+ 2025-09-27 10:14:49,299 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 16/100 | Train Loss: 0.0377 | Val rms_score: 0.3624
+ 2025-09-27 10:15:09,806 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 17/100 | Train Loss: 0.0459 | Val rms_score: 0.3566
+ 2025-09-27 10:15:32,706 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 18/100 | Train Loss: 0.0375 | Val rms_score: 0.3557
+ 2025-09-27 10:15:55,401 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 19/100 | Train Loss: 0.0326 | Val rms_score: 0.3559
+ 2025-09-27 10:16:16,502 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 20/100 | Train Loss: 0.0320 | Val rms_score: 0.3530
+ 2025-09-27 10:16:39,028 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 21/100 | Train Loss: 0.0288 | Val rms_score: 0.3519
+ 2025-09-27 10:16:39,744 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 5565
+ 2025-09-27 10:16:40,416 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 21 with val rms_score: 0.3519
+ 2025-09-27 10:17:02,248 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 22/100 | Train Loss: 0.0388 | Val rms_score: 0.3572
+ 2025-09-27 10:17:25,388 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 23/100 | Train Loss: 0.0299 | Val rms_score: 0.3581
+ 2025-09-27 10:17:49,125 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 24/100 | Train Loss: 0.0292 | Val rms_score: 0.3578
+ 2025-09-27 10:18:10,343 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 25/100 | Train Loss: 0.0262 | Val rms_score: 0.3546
+ 2025-09-27 10:18:31,631 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 26/100 | Train Loss: 0.0332 | Val rms_score: 0.3557
+ 2025-09-27 10:18:54,235 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 27/100 | Train Loss: 0.0250 | Val rms_score: 0.3548
+ 2025-09-27 10:19:14,114 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 28/100 | Train Loss: 0.0311 | Val rms_score: 0.3514
+ 2025-09-27 10:19:14,291 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 7420
+ 2025-09-27 10:19:14,986 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 28 with val rms_score: 0.3514
+ 2025-09-27 10:19:39,875 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 29/100 | Train Loss: 0.0281 | Val rms_score: 0.3544
+ 2025-09-27 10:20:01,808 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 30/100 | Train Loss: 0.0272 | Val rms_score: 0.3525
+ 2025-09-27 10:20:22,646 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 31/100 | Train Loss: 0.0189 | Val rms_score: 0.3550
+ 2025-09-27 10:20:44,143 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 32/100 | Train Loss: 0.0232 | Val rms_score: 0.3537
+ 2025-09-27 10:21:04,611 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 33/100 | Train Loss: 0.0241 | Val rms_score: 0.3579
+ 2025-09-27 10:21:27,842 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 34/100 | Train Loss: 0.0224 | Val rms_score: 0.3559
+ 2025-09-27 10:21:52,069 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 35/100 | Train Loss: 0.0233 | Val rms_score: 0.3575
+ 2025-09-27 10:22:12,870 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 36/100 | Train Loss: 0.0246 | Val rms_score: 0.3504
+ 2025-09-27 10:22:13,508 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 9540
+ 2025-09-27 10:22:14,247 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 36 with val rms_score: 0.3504
+ 2025-09-27 10:22:35,226 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 37/100 | Train Loss: 0.0166 | Val rms_score: 0.3529
+ 2025-09-27 10:22:55,867 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 38/100 | Train Loss: 0.0222 | Val rms_score: 0.3538
+ 2025-09-27 10:23:18,255 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 39/100 | Train Loss: 0.0222 | Val rms_score: 0.3549
+ 2025-09-27 10:23:36,909 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 40/100 | Train Loss: 0.0213 | Val rms_score: 0.3546
+ 2025-09-27 10:23:56,699 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 41/100 | Train Loss: 0.0222 | Val rms_score: 0.3520
+ 2025-09-27 10:24:19,746 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 42/100 | Train Loss: 0.0203 | Val rms_score: 0.3539
+ 2025-09-27 10:24:41,065 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 43/100 | Train Loss: 0.0212 | Val rms_score: 0.3558
+ 2025-09-27 10:25:03,355 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 44/100 | Train Loss: 0.0232 | Val rms_score: 0.3546
+ 2025-09-27 10:25:22,811 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 45/100 | Train Loss: 0.0256 | Val rms_score: 0.3547
+ 2025-09-27 10:25:43,910 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 46/100 | Train Loss: 0.0210 | Val rms_score: 0.3521
+ 2025-09-27 10:26:05,100 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 47/100 | Train Loss: 0.0233 | Val rms_score: 0.3526
+ 2025-09-27 10:26:29,106 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 48/100 | Train Loss: 0.0250 | Val rms_score: 0.3568
+ 2025-09-27 10:26:49,686 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 49/100 | Train Loss: 0.0198 | Val rms_score: 0.3548
+ 2025-09-27 10:27:11,841 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 50/100 | Train Loss: 0.0211 | Val rms_score: 0.3561
+ 2025-09-27 10:27:31,605 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 51/100 | Train Loss: 0.0247 | Val rms_score: 0.3525
+ 2025-09-27 10:27:52,283 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 52/100 | Train Loss: 0.0244 | Val rms_score: 0.3534
+ 2025-09-27 10:28:17,308 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 53/100 | Train Loss: 0.0199 | Val rms_score: 0.3527
+ 2025-09-27 10:28:41,107 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 54/100 | Train Loss: 0.0128 | Val rms_score: 0.3504
+ 2025-09-27 10:28:41,286 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 14310
+ 2025-09-27 10:28:41,984 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 54 with val rms_score: 0.3504
+ 2025-09-27 10:29:02,697 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 55/100 | Train Loss: 0.0222 | Val rms_score: 0.3515
+ 2025-09-27 10:29:22,922 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 56/100 | Train Loss: 0.0192 | Val rms_score: 0.3520
+ 2025-09-27 10:29:45,480 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 57/100 | Train Loss: 0.0270 | Val rms_score: 0.3521
+ 2025-09-27 10:30:05,855 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 58/100 | Train Loss: 0.0206 | Val rms_score: 0.3515
+ 2025-09-27 10:30:29,221 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 59/100 | Train Loss: 0.0212 | Val rms_score: 0.3536
+ 2025-09-27 10:30:51,934 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 60/100 | Train Loss: 0.0183 | Val rms_score: 0.3533
+ 2025-09-27 10:31:15,958 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 61/100 | Train Loss: 0.0185 | Val rms_score: 0.3520
+ 2025-09-27 10:31:36,096 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 62/100 | Train Loss: 0.0199 | Val rms_score: 0.3530
+ 2025-09-27 10:31:57,460 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 63/100 | Train Loss: 0.0196 | Val rms_score: 0.3561
+ 2025-09-27 10:32:22,620 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 64/100 | Train Loss: 0.0173 | Val rms_score: 0.3537
+ 2025-09-27 10:32:47,283 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 65/100 | Train Loss: 0.0168 | Val rms_score: 0.3543
+ 2025-09-27 10:33:08,326 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 66/100 | Train Loss: 0.0190 | Val rms_score: 0.3519
+ 2025-09-27 10:33:30,929 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 67/100 | Train Loss: 0.0175 | Val rms_score: 0.3528
+ 2025-09-27 10:33:54,234 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 68/100 | Train Loss: 0.0189 | Val rms_score: 0.3521
+ 2025-09-27 10:34:16,296 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 69/100 | Train Loss: 0.0183 | Val rms_score: 0.3547
+ 2025-09-27 10:34:39,557 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 70/100 | Train Loss: 0.0180 | Val rms_score: 0.3561
+ 2025-09-27 10:35:01,022 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 71/100 | Train Loss: 0.0180 | Val rms_score: 0.3530
+ 2025-09-27 10:35:24,089 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 72/100 | Train Loss: 0.0201 | Val rms_score: 0.3539
+ 2025-09-27 10:35:45,090 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 73/100 | Train Loss: 0.0187 | Val rms_score: 0.3557
+ 2025-09-27 10:36:06,134 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 74/100 | Train Loss: 0.0149 | Val rms_score: 0.3545
+ 2025-09-27 10:36:27,146 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 75/100 | Train Loss: 0.0193 | Val rms_score: 0.3521
+ 2025-09-27 10:36:50,836 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 76/100 | Train Loss: 0.0181 | Val rms_score: 0.3513
+ 2025-09-27 10:37:11,315 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 77/100 | Train Loss: 0.0197 | Val rms_score: 0.3538
+ 2025-09-27 10:37:32,295 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 78/100 | Train Loss: 0.0162 | Val rms_score: 0.3554
+ 2025-09-27 10:37:54,262 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 79/100 | Train Loss: 0.0141 | Val rms_score: 0.3533
+ 2025-09-27 10:38:18,446 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 80/100 | Train Loss: 0.0175 | Val rms_score: 0.3510
+ 2025-09-27 10:38:39,664 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 81/100 | Train Loss: 0.0163 | Val rms_score: 0.3536
+ 2025-09-27 10:39:01,110 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 82/100 | Train Loss: 0.0165 | Val rms_score: 0.3509
+ 2025-09-27 10:39:22,243 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 83/100 | Train Loss: 0.0178 | Val rms_score: 0.3512
+ 2025-09-27 10:39:44,138 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 84/100 | Train Loss: 0.0180 | Val rms_score: 0.3532
+ 2025-09-27 10:40:06,679 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 85/100 | Train Loss: 0.0181 | Val rms_score: 0.3529
+ 2025-09-27 10:40:26,895 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 86/100 | Train Loss: 0.0177 | Val rms_score: 0.3527
+ 2025-09-27 10:40:48,988 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 87/100 | Train Loss: 0.0206 | Val rms_score: 0.3535
+ 2025-09-27 10:41:09,329 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 88/100 | Train Loss: 0.0151 | Val rms_score: 0.3498
+ 2025-09-27 10:41:09,517 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 23320
+ 2025-09-27 10:41:10,286 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 88 with val rms_score: 0.3498
+ 2025-09-27 10:41:30,646 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 89/100 | Train Loss: 0.0150 | Val rms_score: 0.3520
+ 2025-09-27 10:41:54,073 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 90/100 | Train Loss: 0.0169 | Val rms_score: 0.3511
+ 2025-09-27 10:42:15,593 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 91/100 | Train Loss: 0.0145 | Val rms_score: 0.3499
+ 2025-09-27 10:42:37,590 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 92/100 | Train Loss: 0.0163 | Val rms_score: 0.3497
+ 2025-09-27 10:42:37,782 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Global step of best model: 24380
+ 2025-09-27 10:42:38,513 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Best model saved at epoch 92 with val rms_score: 0.3497
+ 2025-09-27 10:42:58,649 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 93/100 | Train Loss: 0.0168 | Val rms_score: 0.3533
+ 2025-09-27 10:43:19,310 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 94/100 | Train Loss: 0.0109 | Val rms_score: 0.3514
+ 2025-09-27 10:43:43,953 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 95/100 | Train Loss: 0.0179 | Val rms_score: 0.3513
+ 2025-09-27 10:44:04,264 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 96/100 | Train Loss: 0.0160 | Val rms_score: 0.3554
+ 2025-09-27 10:44:24,872 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 97/100 | Train Loss: 0.0193 | Val rms_score: 0.3534
+ 2025-09-27 10:44:45,744 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 98/100 | Train Loss: 0.0179 | Val rms_score: 0.3540
+ 2025-09-27 10:45:09,133 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 99/100 | Train Loss: 0.0171 | Val rms_score: 0.3551
+ 2025-09-27 10:45:31,845 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Epoch 100/100 | Train Loss: 0.0170 | Val rms_score: 0.3539
+ 2025-09-27 10:45:33,043 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Test rms_score: 0.5023
+ 2025-09-27 10:45:33,637 - logs_modchembert_adme_permeability_epochs100_batch_size8 - INFO - Final Triplicate Test Results — Avg rms_score: 0.5025, Std Dev: 0.0029
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_ppb_h_epochs100_batch_size32_20250927_084912.log ADDED
@@ -0,0 +1,337 @@
+ 2025-09-27 08:49:12,235 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_ppb_h
+ 2025-09-27 08:49:12,235 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - dataset: adme_ppb_h, tasks: ['y'], epochs: 100, learning rate: 1e-05, transform: True
+ 2025-09-27 08:49:12,240 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_ppb_h at 2025-09-27_08-49-12
+ 2025-09-27 08:49:13,546 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.1938 | Val rms_score: 0.5142
+ 2025-09-27 08:49:13,546 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
+ 2025-09-27 08:49:14,404 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5142
+ 2025-09-27 08:49:16,832 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 1.0125 | Val rms_score: 0.5039
+ 2025-09-27 08:49:17,017 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
+ 2025-09-27 08:49:17,565 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5039
+ 2025-09-27 08:49:19,793 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.8500 | Val rms_score: 0.5073
+ 2025-09-27 08:49:22,083 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.7500 | Val rms_score: 0.5010
+ 2025-09-27 08:49:22,260 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 20
+ 2025-09-27 08:49:22,802 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5010
+ 2025-09-27 08:49:24,948 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.6312 | Val rms_score: 0.5009
+ 2025-09-27 08:49:25,128 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 25
+ 2025-09-27 08:49:25,655 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.5009
+ 2025-09-27 08:49:28,023 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.5594 | Val rms_score: 0.5008
+ 2025-09-27 08:49:28,457 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 30
+ 2025-09-27 08:49:29,006 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.5008
+ 2025-09-27 08:49:31,213 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.5031 | Val rms_score: 0.5069
+ 2025-09-27 08:49:33,475 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.4437 | Val rms_score: 0.5140
+ 2025-09-27 08:49:35,483 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.4031 | Val rms_score: 0.5217
+ 2025-09-27 08:49:37,566 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.3594 | Val rms_score: 0.5320
+ 2025-09-27 08:49:39,620 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.3438 | Val rms_score: 0.5384
+ 2025-09-27 08:49:39,313 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.3219 | Val rms_score: 0.5447
+ 2025-09-27 08:49:41,306 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.3094 | Val rms_score: 0.5505
+ 2025-09-27 08:49:43,363 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2844 | Val rms_score: 0.5538
+ 2025-09-27 08:49:45,387 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.2641 | Val rms_score: 0.5559
+ 2025-09-27 08:49:47,394 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2703 | Val rms_score: 0.5576
+ 2025-09-27 08:49:49,672 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2469 | Val rms_score: 0.5603
+ 2025-09-27 08:49:51,724 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.2359 | Val rms_score: 0.5606
+ 2025-09-27 08:49:53,709 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.2344 | Val rms_score: 0.5600
+ 2025-09-27 08:49:55,777 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.2188 | Val rms_score: 0.5611
+ 2025-09-27 08:49:57,841 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.2078 | Val rms_score: 0.5606
+ 2025-09-27 08:50:00,240 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1844 | Val rms_score: 0.5630
+ 2025-09-27 08:50:02,254 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1844 | Val rms_score: 0.5617
+ 2025-09-27 08:50:04,467 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1891 | Val rms_score: 0.5630
+ 2025-09-27 08:50:06,573 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1727 | Val rms_score: 0.5661
+ 2025-09-27 08:50:05,991 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.1602 | Val rms_score: 0.5680
+ 2025-09-27 08:50:08,669 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.1562 | Val rms_score: 0.5693
+ 2025-09-27 08:50:11,630 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.1430 | Val rms_score: 0.5701
+ 2025-09-27 08:50:14,747 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1391 | Val rms_score: 0.5706
+ 2025-09-27 08:50:17,796 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.1414 | Val rms_score: 0.5712
+ 2025-09-27 08:50:20,947 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.1289 | Val rms_score: 0.5749
+ 2025-09-27 08:50:24,360 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.1391 | Val rms_score: 0.5726
+ 2025-09-27 08:50:26,964 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.1328 | Val rms_score: 0.5735
+ 2025-09-27 08:50:29,843 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1219 | Val rms_score: 0.5750
+ 2025-09-27 08:50:32,891 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.1094 | Val rms_score: 0.5762
+ 2025-09-27 08:50:33,078 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.1086 | Val rms_score: 0.5798
+ 2025-09-27 08:50:36,333 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.1055 | Val rms_score: 0.5824
+ 2025-09-27 08:50:39,205 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.1000 | Val rms_score: 0.5815
+ 2025-09-27 08:50:42,181 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0941 | Val rms_score: 0.5823
+ 2025-09-27 08:50:44,444 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0934 | Val rms_score: 0.5841
+ 2025-09-27 08:50:47,490 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0969 | Val rms_score: 0.5862
+ 2025-09-27 08:50:50,536 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0805 | Val rms_score: 0.5832
+ 2025-09-27 08:50:53,494 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0852 | Val rms_score: 0.5849
+ 2025-09-27 08:50:56,288 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0805 | Val rms_score: 0.5838
+ 2025-09-27 08:50:59,393 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0715 | Val rms_score: 0.5867
+ 2025-09-27 08:50:59,877 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0758 | Val rms_score: 0.5873
+ 2025-09-27 08:51:03,398 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0742 | Val rms_score: 0.5894
+ 2025-09-27 08:51:05,787 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0691 | Val rms_score: 0.5910
+ 2025-09-27 08:51:08,657 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0664 | Val rms_score: 0.5890
+ 2025-09-27 08:51:11,475 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0637 | Val rms_score: 0.5895
+ 2025-09-27 08:51:14,605 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0574 | Val rms_score: 0.5882
+ 2025-09-27 08:51:17,613 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0590 | Val rms_score: 0.5891
+ 2025-09-27 08:51:20,490 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0602 | Val rms_score: 0.5908
+ 2025-09-27 08:51:23,296 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0535 | Val rms_score: 0.5940
+ 2025-09-27 08:51:26,303 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0598 | Val rms_score: 0.5946
+ 2025-09-27 08:51:28,757 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0590 | Val rms_score: 0.5968
+ 2025-09-27 08:51:29,157 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0488 | Val rms_score: 0.5967
+ 2025-09-27 08:51:32,214 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0508 | Val rms_score: 0.5944
+ 2025-09-27 08:51:35,255 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0449 | Val rms_score: 0.5916
+ 2025-09-27 08:51:38,108 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0484 | Val rms_score: 0.5901
+ 2025-09-27 08:51:41,033 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0480 | Val rms_score: 0.5932
+ 2025-09-27 08:51:44,267 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0461 | Val rms_score: 0.5960
+ 2025-09-27 08:51:47,147 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0500 | Val rms_score: 0.5963
+ 2025-09-27 08:51:49,712 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0396 | Val rms_score: 0.5969
+ 2025-09-27 08:51:52,525 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0418 | Val rms_score: 0.5971
+ 2025-09-27 08:51:55,300 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0422 | Val rms_score: 0.5976
+ 2025-09-27 08:51:55,610 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0437 | Val rms_score: 0.5992
+ 2025-09-27 08:51:59,564 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0395 | Val rms_score: 0.5975
+ 2025-09-27 08:52:02,407 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0391 | Val rms_score: 0.5947
+ 2025-09-27 08:52:05,191 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0363 | Val rms_score: 0.5960
+ 2025-09-27 08:52:08,040 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0332 | Val rms_score: 0.5961
+ 2025-09-27 08:52:10,646 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0346 | Val rms_score: 0.5958
+ 2025-09-27 08:52:13,436 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0400 | Val rms_score: 0.5967
+ 2025-09-27 08:52:17,123 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0309 | Val rms_score: 0.5974
+ 2025-09-27 08:52:19,860 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0342 | Val rms_score: 0.5962
+ 2025-09-27 08:52:22,614 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0348 | Val rms_score: 0.5975
+ 2025-09-27 08:52:22,850 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0281 | Val rms_score: 0.5980
+ 2025-09-27 08:52:25,571 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0334 | Val rms_score: 0.5966
+ 2025-09-27 08:52:28,300 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0367 | Val rms_score: 0.5961
+ 2025-09-27 08:52:30,774 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0309 | Val rms_score: 0.5981
+ 2025-09-27 08:52:33,637 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0283 | Val rms_score: 0.5984
+ 2025-09-27 08:52:36,796 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0295 | Val rms_score: 0.5988
+ 2025-09-27 08:52:39,563 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0289 | Val rms_score: 0.5963
+ 2025-09-27 08:52:43,226 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0309 | Val rms_score: 0.5973
+ 2025-09-27 08:52:45,958 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0262 | Val rms_score: 0.5997
+ 2025-09-27 08:52:48,723 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0324 | Val rms_score: 0.6014
+ 2025-09-27 08:52:49,185 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0271 | Val rms_score: 0.6013
+ 2025-09-27 08:52:51,651 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0220 | Val rms_score: 0.5990
+ 2025-09-27 08:52:54,439 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0260 | Val rms_score: 0.5970
+ 2025-09-27 08:52:57,475 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0270 | Val rms_score: 0.5976
+ 2025-09-27 08:53:00,566 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0245 | Val rms_score: 0.5988
+ 2025-09-27 08:53:03,673 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0266 | Val rms_score: 0.6012
+ 2025-09-27 08:53:07,391 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0246 | Val rms_score: 0.6015
+ 2025-09-27 08:53:10,196 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0273 | Val rms_score: 0.6015
+ 2025-09-27 08:53:12,575 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0229 | Val rms_score: 0.6009
+ 2025-09-27 08:53:15,555 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0229 | Val rms_score: 0.6000
+ 2025-09-27 08:53:15,867 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0229 | Val rms_score: 0.6016
+ 2025-09-27 08:53:18,606 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0227 | Val rms_score: 0.6025
+ 2025-09-27 08:53:21,601 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0238 | Val rms_score: 0.6004
+ 2025-09-27 08:53:24,984 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0215 | Val rms_score: 0.5997
+ 2025-09-27 08:53:25,537 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.8736
+ 2025-09-27 08:53:25,854 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_ppb_h at 2025-09-27_08-53-25
+ 2025-09-27 08:53:28,284 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.1687 | Val rms_score: 0.5275
+ 2025-09-27 08:53:28,284 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
+ 2025-09-27 08:53:29,115 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5275
+ 2025-09-27 08:53:32,342 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.9938 | Val rms_score: 0.5048
+ 2025-09-27 08:53:32,538 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
+ 2025-09-27 08:53:33,122 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5048
+ 2025-09-27 08:53:35,952 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.8438 | Val rms_score: 0.5000
+ 2025-09-27 08:53:36,154 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 15
+ 2025-09-27 08:53:36,776 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5000
+ 2025-09-27 08:53:39,621 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.7562 | Val rms_score: 0.5005
+ 2025-09-27 08:53:42,417 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.6344 | Val rms_score: 0.4971
+ 2025-09-27 08:53:42,691 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 25
+ 2025-09-27 08:53:43,289 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4971
+ 2025-09-27 08:53:43,409 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.5437 | Val rms_score: 0.4998
+ 2025-09-27 08:53:46,710 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.5125 | Val rms_score: 0.5080
+ 2025-09-27 08:53:49,663 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.4344 | Val rms_score: 0.5174
+ 2025-09-27 08:53:52,315 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.4062 | Val rms_score: 0.5294
+ 2025-09-27 08:53:54,631 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.3641 | Val rms_score: 0.5343
+ 2025-09-27 08:53:57,397 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.3453 | Val rms_score: 0.5406
+ 2025-09-27 08:54:00,441 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.3219 | Val rms_score: 0.5464
+ 2025-09-27 08:54:03,245 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2969 | Val rms_score: 0.5509
+ 2025-09-27 08:54:05,983 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2875 | Val rms_score: 0.5541
+ 2025-09-27 08:54:08,738 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.2703 | Val rms_score: 0.5566
+ 2025-09-27 08:54:11,464 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2594 | Val rms_score: 0.5598
+ 2025-09-27 08:54:12,052 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2391 | Val rms_score: 0.5609
+ 2025-09-27 08:54:14,801 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.2313 | Val rms_score: 0.5609
+ 2025-09-27 08:54:17,555 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.2156 | Val rms_score: 0.5629
+ 2025-09-27 08:54:20,359 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.2000 | Val rms_score: 0.5628
+ 2025-09-27 08:54:23,137 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1977 | Val rms_score: 0.5630
+ 2025-09-27 08:54:26,281 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1883 | Val rms_score: 0.5654
+ 2025-09-27 08:54:29,071 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1859 | Val rms_score: 0.5687
+ 2025-09-27 08:54:31,857 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1664 | Val rms_score: 0.5685
+ 2025-09-27 08:54:34,841 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1648 | Val rms_score: 0.5685
+ 2025-09-27 08:54:37,323 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.1602 | Val rms_score: 0.5704
150
+ 2025-09-27 08:54:37,632 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.1484 | Val rms_score: 0.5699
151
+ 2025-09-27 08:54:40,456 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.1437 | Val rms_score: 0.5719
152
+ 2025-09-27 08:54:43,291 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1375 | Val rms_score: 0.5728
153
+ 2025-09-27 08:54:46,132 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.1328 | Val rms_score: 0.5731
154
+ 2025-09-27 08:54:48,997 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.1281 | Val rms_score: 0.5742
155
+ 2025-09-27 08:54:51,989 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.1219 | Val rms_score: 0.5751
156
+ 2025-09-27 08:54:54,598 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.1187 | Val rms_score: 0.5770
157
+ 2025-09-27 08:54:57,334 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1219 | Val rms_score: 0.5829
158
+ 2025-09-27 08:54:59,589 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.1148 | Val rms_score: 0.5808
159
+ 2025-09-27 08:55:02,405 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0984 | Val rms_score: 0.5840
160
+ 2025-09-27 08:55:05,431 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.1094 | Val rms_score: 0.5851
161
+ 2025-09-27 08:55:05,482 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0922 | Val rms_score: 0.5824
162
+ 2025-09-27 08:55:08,253 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.1016 | Val rms_score: 0.5795
163
+ 2025-09-27 08:55:10,965 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0867 | Val rms_score: 0.5829
164
+ 2025-09-27 08:55:13,950 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0922 | Val rms_score: 0.5868
165
+ 2025-09-27 08:55:16,579 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0871 | Val rms_score: 0.5886
166
+ 2025-09-27 08:55:19,411 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0844 | Val rms_score: 0.5879
167
+ 2025-09-27 08:55:22,542 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0715 | Val rms_score: 0.5871
168
+ 2025-09-27 08:55:25,328 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0734 | Val rms_score: 0.5850
169
+ 2025-09-27 08:55:28,074 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0750 | Val rms_score: 0.5869
170
+ 2025-09-27 08:55:31,153 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0746 | Val rms_score: 0.5882
171
+ 2025-09-27 08:55:33,933 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0648 | Val rms_score: 0.5905
172
+ 2025-09-27 08:55:33,924 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0660 | Val rms_score: 0.5912
173
+ 2025-09-27 08:55:36,257 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0625 | Val rms_score: 0.5928
174
+ 2025-09-27 08:55:39,016 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0594 | Val rms_score: 0.5928
175
+ 2025-09-27 08:55:42,087 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0586 | Val rms_score: 0.5941
176
+ 2025-09-27 08:55:44,918 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0543 | Val rms_score: 0.5910
177
+ 2025-09-27 08:55:47,691 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0559 | Val rms_score: 0.5912
178
+ 2025-09-27 08:55:50,418 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0516 | Val rms_score: 0.5911
179
+ 2025-09-27 08:55:53,187 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0570 | Val rms_score: 0.5901
180
+ 2025-09-27 08:55:56,514 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0508 | Val rms_score: 0.5929
181
+ 2025-09-27 08:55:59,078 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0508 | Val rms_score: 0.5907
182
+ 2025-09-27 08:55:59,104 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0492 | Val rms_score: 0.5906
183
+ 2025-09-27 08:56:01,849 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0475 | Val rms_score: 0.5907
184
+ 2025-09-27 08:56:04,632 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0492 | Val rms_score: 0.5914
185
+ 2025-09-27 08:56:07,628 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0437 | Val rms_score: 0.5933
186
+ 2025-09-27 08:56:10,427 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0426 | Val rms_score: 0.5956
187
+ 2025-09-27 08:56:13,135 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0391 | Val rms_score: 0.5960
188
+ 2025-09-27 08:56:15,951 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0375 | Val rms_score: 0.5964
189
+ 2025-09-27 08:56:18,441 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0352 | Val rms_score: 0.5986
190
+ 2025-09-27 08:56:21,525 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0393 | Val rms_score: 0.5991
191
+ 2025-09-27 08:56:24,260 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0322 | Val rms_score: 0.5975
192
+ 2025-09-27 08:56:27,028 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0377 | Val rms_score: 0.5962
193
+ 2025-09-27 08:56:27,075 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0377 | Val rms_score: 0.5960
194
+ 2025-09-27 08:56:30,046 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0357 | Val rms_score: 0.5937
195
+ 2025-09-27 08:56:32,907 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0338 | Val rms_score: 0.5936
196
+ 2025-09-27 08:56:35,816 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0312 | Val rms_score: 0.5944
197
+ 2025-09-27 08:56:38,265 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0346 | Val rms_score: 0.5960
198
+ 2025-09-27 08:56:41,458 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0393 | Val rms_score: 0.5961
199
+ 2025-09-27 08:56:44,305 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0334 | Val rms_score: 0.5936
200
+ 2025-09-27 08:56:47,536 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0338 | Val rms_score: 0.5933
201
+ 2025-09-27 08:56:50,447 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0305 | Val rms_score: 0.5974
202
+ 2025-09-27 08:56:53,210 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0322 | Val rms_score: 0.5965
203
+ 2025-09-27 08:56:53,659 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0275 | Val rms_score: 0.5963
204
+ 2025-09-27 08:56:56,225 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0275 | Val rms_score: 0.5967
205
+ 2025-09-27 08:56:59,299 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0281 | Val rms_score: 0.5966
206
+ 2025-09-27 08:57:03,192 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0264 | Val rms_score: 0.5989
207
+ 2025-09-27 08:57:05,892 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0250 | Val rms_score: 0.6011
208
+ 2025-09-27 08:57:08,607 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0254 | Val rms_score: 0.6015
209
+ 2025-09-27 08:57:11,410 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0281 | Val rms_score: 0.6014
210
+ 2025-09-27 08:57:14,490 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0283 | Val rms_score: 0.5997
211
+ 2025-09-27 08:57:17,309 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0240 | Val rms_score: 0.6007
212
+ 2025-09-27 08:57:20,186 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0244 | Val rms_score: 0.5994
213
+ 2025-09-27 08:57:22,990 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0223 | Val rms_score: 0.5977
214
+ 2025-09-27 08:57:23,074 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0235 | Val rms_score: 0.5988
215
+ 2025-09-27 08:57:26,149 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0199 | Val rms_score: 0.5996
216
+ 2025-09-27 08:57:28,942 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0239 | Val rms_score: 0.6025
217
+ 2025-09-27 08:57:31,994 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0211 | Val rms_score: 0.6016
218
+ 2025-09-27 08:57:34,749 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0188 | Val rms_score: 0.6004
219
+ 2025-09-27 08:57:37,583 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0193 | Val rms_score: 0.6026
220
+ 2025-09-27 08:57:40,487 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0209 | Val rms_score: 0.6027
221
+ 2025-09-27 08:57:43,289 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0238 | Val rms_score: 0.6029
222
+ 2025-09-27 08:57:47,087 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0201 | Val rms_score: 0.6036
223
+ 2025-09-27 08:57:49,911 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0199 | Val rms_score: 0.6030
224
+ 2025-09-27 08:57:50,443 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.8937
225
+ 2025-09-27 08:57:50,666 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_ppb_h at 2025-09-27_08-57-50
226
+ 2025-09-27 08:57:50,638 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.2437 | Val rms_score: 0.5345
227
+ 2025-09-27 08:57:50,638 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
+ 2025-09-27 08:57:51,249 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5345
+ 2025-09-27 08:57:54,357 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 1.0375 | Val rms_score: 0.5126
+ 2025-09-27 08:57:54,555 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
+ 2025-09-27 08:57:55,196 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5126
+ 2025-09-27 08:57:58,108 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.8750 | Val rms_score: 0.5014
+ 2025-09-27 08:57:58,310 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 15
+ 2025-09-27 08:57:58,912 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5014
+ 2025-09-27 08:58:01,824 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.7500 | Val rms_score: 0.4917
+ 2025-09-27 08:58:02,031 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 20
+ 2025-09-27 08:58:02,650 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4917
+ 2025-09-27 08:58:05,416 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.6500 | Val rms_score: 0.4888
+ 2025-09-27 08:58:05,618 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 25
+ 2025-09-27 08:58:06,211 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4888
+ 2025-09-27 08:58:09,310 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.5844 | Val rms_score: 0.4977
+ 2025-09-27 08:58:12,715 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.5188 | Val rms_score: 0.5034
+ 2025-09-27 08:58:15,557 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.4844 | Val rms_score: 0.5112
+ 2025-09-27 08:58:15,640 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.4313 | Val rms_score: 0.5192
+ 2025-09-27 08:58:18,530 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.3859 | Val rms_score: 0.5262
+ 2025-09-27 08:58:21,033 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.3578 | Val rms_score: 0.5326
+ 2025-09-27 08:58:24,404 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.3250 | Val rms_score: 0.5381
+ 2025-09-27 08:58:27,280 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2953 | Val rms_score: 0.5417
+ 2025-09-27 08:58:30,056 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2891 | Val rms_score: 0.5448
+ 2025-09-27 08:58:33,222 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.2812 | Val rms_score: 0.5490
+ 2025-09-27 08:58:36,129 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2812 | Val rms_score: 0.5533
+ 2025-09-27 08:58:39,193 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2531 | Val rms_score: 0.5546
+ 2025-09-27 08:58:41,913 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.2500 | Val rms_score: 0.5539
+ 2025-09-27 08:58:44,555 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.2313 | Val rms_score: 0.5546
+ 2025-09-27 08:58:44,706 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.2250 | Val rms_score: 0.5558
+ 2025-09-27 08:58:47,476 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.2109 | Val rms_score: 0.5591
+ 2025-09-27 08:58:50,506 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.2000 | Val rms_score: 0.5601
+ 2025-09-27 08:58:53,246 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.1922 | Val rms_score: 0.5634
+ 2025-09-27 08:58:56,062 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1883 | Val rms_score: 0.5624
+ 2025-09-27 08:58:58,847 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1891 | Val rms_score: 0.5625
+ 2025-09-27 08:59:01,572 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.1695 | Val rms_score: 0.5636
+ 2025-09-27 08:59:04,365 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.1648 | Val rms_score: 0.5646
+ 2025-09-27 08:59:07,193 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.1516 | Val rms_score: 0.5668
+ 2025-09-27 08:59:09,910 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1477 | Val rms_score: 0.5695
+ 2025-09-27 08:59:10,659 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.1336 | Val rms_score: 0.5698
+ 2025-09-27 08:59:13,491 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.1297 | Val rms_score: 0.5702
+ 2025-09-27 08:59:16,976 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.1359 | Val rms_score: 0.5738
+ 2025-09-27 08:59:19,801 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.1266 | Val rms_score: 0.5742
+ 2025-09-27 08:59:22,392 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1187 | Val rms_score: 0.5756
+ 2025-09-27 08:59:24,832 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.1078 | Val rms_score: 0.5776
+ 2025-09-27 08:59:27,560 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.1086 | Val rms_score: 0.5782
+ 2025-09-27 08:59:30,642 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.1078 | Val rms_score: 0.5797
+ 2025-09-27 08:59:33,675 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.1016 | Val rms_score: 0.5788
+ 2025-09-27 08:59:36,595 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0984 | Val rms_score: 0.5804
+ 2025-09-27 08:59:39,615 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0926 | Val rms_score: 0.5842
+ 2025-09-27 08:59:39,726 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0848 | Val rms_score: 0.5816
+ 2025-09-27 08:59:42,496 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0883 | Val rms_score: 0.5811
+ 2025-09-27 08:59:45,415 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0828 | Val rms_score: 0.5812
+ 2025-09-27 08:59:48,413 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0844 | Val rms_score: 0.5818
+ 2025-09-27 08:59:51,281 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0805 | Val rms_score: 0.5849
+ 2025-09-27 08:59:54,151 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0762 | Val rms_score: 0.5834
+ 2025-09-27 08:59:57,337 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0715 | Val rms_score: 0.5844
+ 2025-09-27 09:00:00,349 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0648 | Val rms_score: 0.5835
+ 2025-09-27 09:00:03,128 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0664 | Val rms_score: 0.5839
+ 2025-09-27 09:00:05,790 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0652 | Val rms_score: 0.5856
+ 2025-09-27 09:00:05,380 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0617 | Val rms_score: 0.5885
+ 2025-09-27 09:00:08,463 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0645 | Val rms_score: 0.5868
+ 2025-09-27 09:00:11,251 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0625 | Val rms_score: 0.5877
+ 2025-09-27 09:00:14,084 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0609 | Val rms_score: 0.5859
+ 2025-09-27 09:00:16,849 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0543 | Val rms_score: 0.5888
+ 2025-09-27 09:00:19,654 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0547 | Val rms_score: 0.5905
+ 2025-09-27 09:00:22,821 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0582 | Val rms_score: 0.5902
+ 2025-09-27 09:00:26,255 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0500 | Val rms_score: 0.5918
+ 2025-09-27 09:00:28,700 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0475 | Val rms_score: 0.5929
+ 2025-09-27 09:00:31,541 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0496 | Val rms_score: 0.5918
+ 2025-09-27 09:00:34,224 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0490 | Val rms_score: 0.5906
+ 2025-09-27 09:00:34,725 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0416 | Val rms_score: 0.5883
+ 2025-09-27 09:00:37,512 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0383 | Val rms_score: 0.5911
+ 2025-09-27 09:00:40,355 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0436 | Val rms_score: 0.5955
+ 2025-09-27 09:00:43,197 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0430 | Val rms_score: 0.5947
+ 2025-09-27 09:00:46,148 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0428 | Val rms_score: 0.5944
+ 2025-09-27 09:00:49,013 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0400 | Val rms_score: 0.5936
+ 2025-09-27 09:00:51,840 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0406 | Val rms_score: 0.5927
+ 2025-09-27 09:00:54,656 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0381 | Val rms_score: 0.5945
+ 2025-09-27 09:00:57,495 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0389 | Val rms_score: 0.5968
+ 2025-09-27 09:01:00,512 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0383 | Val rms_score: 0.5973
+ 2025-09-27 09:01:00,793 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0381 | Val rms_score: 0.5974
+ 2025-09-27 09:01:03,566 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0338 | Val rms_score: 0.5972
+ 2025-09-27 09:01:06,306 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0354 | Val rms_score: 0.5954
+ 2025-09-27 09:01:08,718 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0363 | Val rms_score: 0.5946
+ 2025-09-27 09:01:11,334 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0328 | Val rms_score: 0.5939
+ 2025-09-27 09:01:14,391 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0309 | Val rms_score: 0.5958
+ 2025-09-27 09:01:17,245 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0291 | Val rms_score: 0.5973
+ 2025-09-27 09:01:19,997 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0336 | Val rms_score: 0.5972
+ 2025-09-27 09:01:22,830 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0303 | Val rms_score: 0.5946
+ 2025-09-27 09:01:25,714 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0273 | Val rms_score: 0.5945
+ 2025-09-27 09:01:27,002 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0334 | Val rms_score: 0.5943
+ 2025-09-27 09:01:31,246 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0355 | Val rms_score: 0.5933
+ 2025-09-27 09:01:34,024 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0277 | Val rms_score: 0.5946
+ 2025-09-27 09:01:37,255 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0258 | Val rms_score: 0.5967
+ 2025-09-27 09:01:40,775 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0291 | Val rms_score: 0.5978
+ 2025-09-27 09:01:43,851 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0258 | Val rms_score: 0.5983
+ 2025-09-27 09:01:46,632 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0252 | Val rms_score: 0.5975
+ 2025-09-27 09:01:49,417 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0260 | Val rms_score: 0.5971
+ 2025-09-27 09:01:53,799 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0271 | Val rms_score: 0.5974
+ 2025-09-27 09:01:54,270 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0256 | Val rms_score: 0.5979
+ 2025-09-27 09:01:58,105 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0225 | Val rms_score: 0.5963
+ 2025-09-27 09:02:00,929 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0236 | Val rms_score: 0.5966
+ 2025-09-27 09:02:03,740 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0229 | Val rms_score: 0.5984
+ 2025-09-27 09:02:06,533 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0239 | Val rms_score: 0.6012
+ 2025-09-27 09:02:09,576 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0214 | Val rms_score: 0.5993
+ 2025-09-27 09:02:12,671 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0244 | Val rms_score: 0.5962
+ 2025-09-27 09:02:15,225 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0216 | Val rms_score: 0.5944
+ 2025-09-27 09:02:18,006 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0250 | Val rms_score: 0.5940
+ 2025-09-27 09:02:20,910 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0248 | Val rms_score: 0.5959
+ 2025-09-27 09:02:21,543 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.9032
+ 2025-09-27 09:02:21,860 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.8901, Std Dev: 0.0123
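The "Final Triplicate Test Results" line above reduces the three per-run test rms_scores to a mean and a standard deviation. A minimal sketch of that aggregation, using hypothetical placeholder scores (not the actual logged run values) and assuming a population standard deviation over the three runs:

```python
import statistics

# Hypothetical per-run test rms_score values (placeholders, not the logged runs)
test_scores = [0.89, 0.90, 0.88]

avg = statistics.mean(test_scores)
std = statistics.pstdev(test_scores)  # population std dev across the 3 runs

print(f"Final Triplicate Test Results - Avg rms_score: {avg:.4f}, Std Dev: {std:.4f}")
```

Whether the benchmark script uses the population or sample standard deviation is an assumption here; `statistics.stdev` would give the sample version instead.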
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_ppb_r_epochs100_batch_size32_20250927_153939.log ADDED
@@ -0,0 +1,421 @@
1
+ 2025-09-27 15:39:39,317 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_ppb_r
2
+ 2025-09-27 15:39:39,317 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - dataset: adme_ppb_r, tasks: ['y'], epochs: 100, learning rate: 1e-05, transform: True
3
+ 2025-09-27 15:39:39,341 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_ppb_r at 2025-09-27_15-39-39
4
+ 2025-09-27 15:39:41,050 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.3000 | Val rms_score: 0.6700
5
+ 2025-09-27 15:39:41,050 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
6
+ 2025-09-27 15:39:42,060 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.6700
7
+ 2025-09-27 15:39:44,193 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 1.0375 | Val rms_score: 0.6251
8
+ 2025-09-27 15:39:44,373 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
9
+ 2025-09-27 15:39:44,941 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6251
10
+ 2025-09-27 15:39:47,036 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.8938 | Val rms_score: 0.5922
11
+ 2025-09-27 15:39:47,227 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 15
12
+ 2025-09-27 15:39:47,793 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5922
13
+ 2025-09-27 15:39:49,864 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.8375 | Val rms_score: 0.5624
14
+ 2025-09-27 15:39:50,057 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
15
+ 2025-09-27 15:39:50,616 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5624
16
+ 2025-09-27 15:39:52,696 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.7625 | Val rms_score: 0.5402
17
+ 2025-09-27 15:39:52,879 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
18
+ 2025-09-27 15:39:53,443 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.5402
19
+ 2025-09-27 15:39:57,110 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.6687 | Val rms_score: 0.5255
20
+ 2025-09-27 15:39:57,821 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
21
+ 2025-09-27 15:39:59,383 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.5255
22
+ 2025-09-27 15:40:02,449 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.5594 | Val rms_score: 0.5126
+ 2025-09-27 15:40:02,643 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 35
+ 2025-09-27 15:40:03,239 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.5126
+ 2025-09-27 15:40:05,505 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.4906 | Val rms_score: 0.4940
+ 2025-09-27 15:40:05,704 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 40
+ 2025-09-27 15:40:06,308 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.4940
+ 2025-09-27 15:40:08,463 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.4781 | Val rms_score: 0.4885
+ 2025-09-27 15:40:08,660 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-27 15:40:09,251 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.4885
+ 2025-09-27 15:40:11,333 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.4188 | Val rms_score: 0.4752
+ 2025-09-27 15:40:11,537 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 50
+ 2025-09-27 15:40:12,127 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.4752
+ 2025-09-27 15:40:14,216 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.3906 | Val rms_score: 0.4804
+ 2025-09-27 15:40:16,519 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4219 | Val rms_score: 0.4876
+ 2025-09-27 15:40:18,487 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.3750 | Val rms_score: 0.5210
+ 2025-09-27 15:40:20,475 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2969 | Val rms_score: 0.5393
+ 2025-09-27 15:40:22,557 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.2687 | Val rms_score: 0.4922
+ 2025-09-27 15:40:24,585 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2812 | Val rms_score: 0.4468
+ 2025-09-27 15:40:25,111 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 80
+ 2025-09-27 15:40:25,726 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.4468
+ 2025-09-27 15:40:27,892 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2484 | Val rms_score: 0.4400
+ 2025-09-27 15:40:28,093 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 85
+ 2025-09-27 15:40:28,688 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 17 with val rms_score: 0.4400
+ 2025-09-27 15:40:30,741 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.2359 | Val rms_score: 0.4422
+ 2025-09-27 15:40:32,704 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.2359 | Val rms_score: 0.4526
+ 2025-09-27 15:40:34,726 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1977 | Val rms_score: 0.4612
+ 2025-09-27 15:40:36,816 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.3187 | Val rms_score: 0.4419
+ 2025-09-27 15:40:39,387 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1961 | Val rms_score: 0.4101
+ 2025-09-27 15:40:39,589 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 110
+ 2025-09-27 15:40:40,184 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 22 with val rms_score: 0.4101
+ 2025-09-27 15:40:42,467 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.2062 | Val rms_score: 0.4067
+ 2025-09-27 15:40:42,671 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 115
+ 2025-09-27 15:40:43,255 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 23 with val rms_score: 0.4067
+ 2025-09-27 15:40:45,649 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.2062 | Val rms_score: 0.4199
+ 2025-09-27 15:40:47,627 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1547 | Val rms_score: 0.4364
+ 2025-09-27 15:40:49,587 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.1422 | Val rms_score: 0.4388
+ 2025-09-27 15:40:51,943 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.1711 | Val rms_score: 0.4272
+ 2025-09-27 15:40:53,909 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.1523 | Val rms_score: 0.4171
+ 2025-09-27 15:40:55,902 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1500 | Val rms_score: 0.4175
+ 2025-09-27 15:40:57,914 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.2281 | Val rms_score: 0.4318
+ 2025-09-27 15:40:59,927 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.1250 | Val rms_score: 0.4255
+ 2025-09-27 15:41:02,374 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.1180 | Val rms_score: 0.4266
+ 2025-09-27 15:41:04,370 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.1062 | Val rms_score: 0.4349
+ 2025-09-27 15:41:06,359 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1023 | Val rms_score: 0.4378
+ 2025-09-27 15:41:08,351 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0996 | Val rms_score: 0.4384
+ 2025-09-27 15:41:10,339 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.1008 | Val rms_score: 0.4257
+ 2025-09-27 15:41:12,762 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.1008 | Val rms_score: 0.4169
+ 2025-09-27 15:41:14,750 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0984 | Val rms_score: 0.4151
+ 2025-09-27 15:41:16,740 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0945 | Val rms_score: 0.4186
+ 2025-09-27 15:41:18,817 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.1047 | Val rms_score: 0.4220
+ 2025-09-27 15:41:20,819 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0898 | Val rms_score: 0.4214
+ 2025-09-27 15:41:23,321 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0777 | Val rms_score: 0.4296
+ 2025-09-27 15:41:25,969 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0789 | Val rms_score: 0.4454
+ 2025-09-27 15:41:28,274 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0758 | Val rms_score: 0.4581
+ 2025-09-27 15:41:30,313 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.1125 | Val rms_score: 0.4503
+ 2025-09-27 15:41:32,393 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0707 | Val rms_score: 0.4310
+ 2025-09-27 15:41:34,765 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0770 | Val rms_score: 0.4387
+ 2025-09-27 15:41:36,838 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0672 | Val rms_score: 0.4619
+ 2025-09-27 15:41:38,811 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0727 | Val rms_score: 0.4648
+ 2025-09-27 15:41:40,863 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0508 | Val rms_score: 0.4696
+ 2025-09-27 15:41:42,850 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0637 | Val rms_score: 0.4698
+ 2025-09-27 15:41:45,262 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0555 | Val rms_score: 0.4640
+ 2025-09-27 15:41:47,241 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0527 | Val rms_score: 0.4601
+ 2025-09-27 15:41:49,235 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0527 | Val rms_score: 0.4617
+ 2025-09-27 15:41:51,490 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0504 | Val rms_score: 0.4600
+ 2025-09-27 15:41:53,471 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0547 | Val rms_score: 0.4552
+ 2025-09-27 15:41:55,892 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0516 | Val rms_score: 0.4583
+ 2025-09-27 15:41:57,901 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0520 | Val rms_score: 0.4647
+ 2025-09-27 15:41:59,988 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0451 | Val rms_score: 0.4737
+ 2025-09-27 15:42:02,008 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0461 | Val rms_score: 0.4702
+ 2025-09-27 15:42:04,019 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0355 | Val rms_score: 0.4634
+ 2025-09-27 15:42:06,422 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0445 | Val rms_score: 0.4557
+ 2025-09-27 15:42:09,788 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0426 | Val rms_score: 0.4559
+ 2025-09-27 15:42:11,855 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0334 | Val rms_score: 0.4596
+ 2025-09-27 15:42:14,834 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0391 | Val rms_score: 0.4647
+ 2025-09-27 15:42:16,869 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0350 | Val rms_score: 0.4638
+ 2025-09-27 15:42:19,243 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0396 | Val rms_score: 0.4612
+ 2025-09-27 15:42:21,253 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0354 | Val rms_score: 0.4589
+ 2025-09-27 15:42:23,256 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0279 | Val rms_score: 0.4644
+ 2025-09-27 15:42:25,262 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0312 | Val rms_score: 0.4684
+ 2025-09-27 15:42:27,285 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0369 | Val rms_score: 0.4666
+ 2025-09-27 15:42:29,703 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0395 | Val rms_score: 0.4594
+ 2025-09-27 15:42:31,801 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0340 | Val rms_score: 0.4638
+ 2025-09-27 15:42:33,802 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0305 | Val rms_score: 0.4711
+ 2025-09-27 15:42:35,783 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0305 | Val rms_score: 0.4744
+ 2025-09-27 15:42:37,755 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0293 | Val rms_score: 0.4740
+ 2025-09-27 15:42:40,180 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0287 | Val rms_score: 0.4793
+ 2025-09-27 15:42:42,196 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0258 | Val rms_score: 0.4927
+ 2025-09-27 15:42:44,259 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0248 | Val rms_score: 0.4899
+ 2025-09-27 15:42:46,460 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0256 | Val rms_score: 0.4813
+ 2025-09-27 15:42:48,543 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0203 | Val rms_score: 0.4755
+ 2025-09-27 15:42:50,975 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0268 | Val rms_score: 0.4787
+ 2025-09-27 15:42:52,986 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0283 | Val rms_score: 0.4911
+ 2025-09-27 15:42:54,988 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0262 | Val rms_score: 0.5028
+ 2025-09-27 15:42:56,983 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0236 | Val rms_score: 0.5022
+ 2025-09-27 15:42:58,972 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0312 | Val rms_score: 0.4948
+ 2025-09-27 15:43:01,365 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0236 | Val rms_score: 0.4865
+ 2025-09-27 15:43:03,401 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0229 | Val rms_score: 0.4850
+ 2025-09-27 15:43:05,400 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0234 | Val rms_score: 0.4876
+ 2025-09-27 15:43:07,404 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0234 | Val rms_score: 0.4906
+ 2025-09-27 15:43:09,473 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0223 | Val rms_score: 0.4934
+ 2025-09-27 15:43:11,845 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0204 | Val rms_score: 0.4954
+ 2025-09-27 15:43:13,857 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0226 | Val rms_score: 0.4988
+ 2025-09-27 15:43:15,855 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0192 | Val rms_score: 0.4941
+ 2025-09-27 15:43:17,871 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0206 | Val rms_score: 0.4912
+ 2025-09-27 15:43:19,875 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0203 | Val rms_score: 0.4915
+ 2025-09-27 15:43:22,303 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0196 | Val rms_score: 0.4865
+ 2025-09-27 15:43:24,377 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0231 | Val rms_score: 0.4846
+ 2025-09-27 15:43:27,176 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0234 | Val rms_score: 0.4980
+ 2025-09-27 15:43:32,557 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0177 | Val rms_score: 0.4997
+ 2025-09-27 15:43:33,112 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.7395
+ 2025-09-27 15:43:33,477 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_ppb_r at 2025-09-27_15-43-33
+ 2025-09-27 15:43:35,237 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.3687 | Val rms_score: 0.6788
+ 2025-09-27 15:43:35,237 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
+ 2025-09-27 15:43:35,937 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.6788
+ 2025-09-27 15:43:38,826 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 1.1313 | Val rms_score: 0.6369
+ 2025-09-27 15:43:39,008 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
+ 2025-09-27 15:43:39,591 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6369
+ 2025-09-27 15:43:41,723 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 1.1938 | Val rms_score: 0.6122
+ 2025-09-27 15:43:41,914 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 15
+ 2025-09-27 15:43:42,507 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.6122
+ 2025-09-27 15:43:44,573 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.8750 | Val rms_score: 0.5824
+ 2025-09-27 15:43:44,767 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
+ 2025-09-27 15:43:45,380 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5824
+ 2025-09-27 15:43:48,783 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.9437 | Val rms_score: 0.5562
+ 2025-09-27 15:43:48,982 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
+ 2025-09-27 15:43:49,632 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.5562
+ 2025-09-27 15:43:52,311 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.8063 | Val rms_score: 0.5336
+ 2025-09-27 15:43:52,829 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
+ 2025-09-27 15:43:53,419 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.5336
+ 2025-09-27 15:43:55,575 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.7688 | Val rms_score: 0.5129
+ 2025-09-27 15:43:55,770 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 35
+ 2025-09-27 15:43:56,353 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.5129
+ 2025-09-27 15:43:58,496 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.7188 | Val rms_score: 0.4967
+ 2025-09-27 15:43:58,703 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 40
+ 2025-09-27 15:43:59,324 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.4967
+ 2025-09-27 15:44:01,462 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.5656 | Val rms_score: 0.4956
+ 2025-09-27 15:44:01,653 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-27 15:44:02,224 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.4956
+ 2025-09-27 15:44:04,299 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.6188 | Val rms_score: 0.4863
+ 2025-09-27 15:44:04,520 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 50
+ 2025-09-27 15:44:05,095 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.4863
+ 2025-09-27 15:44:07,232 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.5156 | Val rms_score: 0.4906
+ 2025-09-27 15:44:10,695 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4813 | Val rms_score: 0.4827
+ 2025-09-27 15:44:10,890 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 60
+ 2025-09-27 15:44:11,501 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.4827
+ 2025-09-27 15:44:13,738 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.3719 | Val rms_score: 0.4692
+ 2025-09-27 15:44:13,935 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 65
+ 2025-09-27 15:44:14,542 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.4692
+ 2025-09-27 15:44:16,653 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.3484 | Val rms_score: 0.4531
+ 2025-09-27 15:44:16,844 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 70
+ 2025-09-27 15:44:17,444 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 14 with val rms_score: 0.4531
+ 2025-09-27 15:44:19,640 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.3156 | Val rms_score: 0.4465
+ 2025-09-27 15:44:19,849 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 75
+ 2025-09-27 15:44:20,439 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.4465
+ 2025-09-27 15:44:22,736 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.3266 | Val rms_score: 0.4361
+ 2025-09-27 15:44:23,255 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 80
+ 2025-09-27 15:44:23,833 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.4361
+ 2025-09-27 15:44:25,939 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2797 | Val rms_score: 0.4259
+ 2025-09-27 15:44:26,157 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 85
+ 2025-09-27 15:44:26,770 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 17 with val rms_score: 0.4259
+ 2025-09-27 15:44:28,846 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.2578 | Val rms_score: 0.4310
+ 2025-09-27 15:44:31,016 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.2594 | Val rms_score: 0.4347
+ 2025-09-27 15:44:33,090 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.2437 | Val rms_score: 0.4410
+ 2025-09-27 15:44:35,109 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.2406 | Val rms_score: 0.4416
+ 2025-09-27 15:44:37,750 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.2250 | Val rms_score: 0.4351
+ 2025-09-27 15:44:39,757 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.2047 | Val rms_score: 0.4296
+ 2025-09-27 15:44:41,741 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.2188 | Val rms_score: 0.4273
+ 2025-09-27 15:44:43,784 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1914 | Val rms_score: 0.4156
+ 2025-09-27 15:44:43,974 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 125
+ 2025-09-27 15:44:44,571 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.4156
+ 2025-09-27 15:44:47,051 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.2234 | Val rms_score: 0.4082
+ 2025-09-27 15:44:47,599 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 130
+ 2025-09-27 15:44:48,205 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.4082
+ 2025-09-27 15:44:50,478 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.2031 | Val rms_score: 0.4067
+ 2025-09-27 15:44:50,675 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 135
+ 2025-09-27 15:44:51,288 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 27 with val rms_score: 0.4067
+ 2025-09-27 15:44:53,439 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.1859 | Val rms_score: 0.4145
+ 2025-09-27 15:44:55,464 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1617 | Val rms_score: 0.4241
+ 2025-09-27 15:44:57,479 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.1695 | Val rms_score: 0.4277
+ 2025-09-27 15:44:59,489 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.1797 | Val rms_score: 0.4140
+ 2025-09-27 15:45:01,845 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.1766 | Val rms_score: 0.3973
+ 2025-09-27 15:45:02,038 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 160
+ 2025-09-27 15:45:02,644 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 32 with val rms_score: 0.3973
+ 2025-09-27 15:45:04,750 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.1688 | Val rms_score: 0.4023
+ 2025-09-27 15:45:07,128 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1375 | Val rms_score: 0.4130
+ 2025-09-27 15:45:09,122 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.1422 | Val rms_score: 0.4236
+ 2025-09-27 15:45:11,231 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.1313 | Val rms_score: 0.4261
+ 2025-09-27 15:45:13,600 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.1445 | Val rms_score: 0.4163
+ 2025-09-27 15:45:15,622 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.1469 | Val rms_score: 0.3962
+ 2025-09-27 15:45:15,814 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 190
+ 2025-09-27 15:45:16,419 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 38 with val rms_score: 0.3962
+ 2025-09-27 15:45:18,836 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.1547 | Val rms_score: 0.3985
+ 2025-09-27 15:45:20,830 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.1172 | Val rms_score: 0.4035
+ 2025-09-27 15:45:22,818 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.1117 | Val rms_score: 0.4092
+ 2025-09-27 15:45:25,215 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.1047 | Val rms_score: 0.4154
+ 2025-09-27 15:45:27,477 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.1047 | Val rms_score: 0.4250
+ 2025-09-27 15:45:29,478 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.1023 | Val rms_score: 0.4179
+ 2025-09-27 15:45:31,612 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.1078 | Val rms_score: 0.4091
+ 2025-09-27 15:45:34,007 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.1109 | Val rms_score: 0.4096
+ 2025-09-27 15:45:36,492 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.1047 | Val rms_score: 0.4207
+ 2025-09-27 15:45:38,526 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.1039 | Val rms_score: 0.4166
+ 2025-09-27 15:45:40,511 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0957 | Val rms_score: 0.4185
+ 2025-09-27 15:45:42,569 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0816 | Val rms_score: 0.4194
+ 2025-09-27 15:45:44,521 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0809 | Val rms_score: 0.4224
+ 2025-09-27 15:45:46,901 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0773 | Val rms_score: 0.4253
+ 2025-09-27 15:45:48,888 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0906 | Val rms_score: 0.4230
+ 2025-09-27 15:45:50,890 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0703 | Val rms_score: 0.4214
+ 2025-09-27 15:45:53,375 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0758 | Val rms_score: 0.4210
+ 2025-09-27 15:45:55,323 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0789 | Val rms_score: 0.4321
+ 2025-09-27 15:45:57,657 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0773 | Val rms_score: 0.4414
+ 2025-09-27 15:45:59,678 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0691 | Val rms_score: 0.4214
+ 2025-09-27 15:46:01,759 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0570 | Val rms_score: 0.4058
+ 2025-09-27 15:46:03,754 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0578 | Val rms_score: 0.4110
+ 2025-09-27 15:46:05,754 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0559 | Val rms_score: 0.4253
+ 2025-09-27 15:46:08,587 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0633 | Val rms_score: 0.4350
+ 2025-09-27 15:46:10,715 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0602 | Val rms_score: 0.4401
+ 2025-09-27 15:46:13,101 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0555 | Val rms_score: 0.4435
+ 2025-09-27 15:46:15,106 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0523 | Val rms_score: 0.4368
+ 2025-09-27 15:46:17,162 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0461 | Val rms_score: 0.4250
+ 2025-09-27 15:46:19,496 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0449 | Val rms_score: 0.4188
+ 2025-09-27 15:46:21,492 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0668 | Val rms_score: 0.4208
+ 2025-09-27 15:46:23,537 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0586 | Val rms_score: 0.4293
+ 2025-09-27 15:46:25,508 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0396 | Val rms_score: 0.4378
+ 2025-09-27 15:46:27,639 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0402 | Val rms_score: 0.4461
+ 2025-09-27 15:46:30,745 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0412 | Val rms_score: 0.4548
+ 2025-09-27 15:46:32,770 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0420 | Val rms_score: 0.4550
+ 2025-09-27 15:46:35,114 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0453 | Val rms_score: 0.4439
+ 2025-09-27 15:46:37,191 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0465 | Val rms_score: 0.4355
+ 2025-09-27 15:46:39,212 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0387 | Val rms_score: 0.4559
+ 2025-09-27 15:46:41,563 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0311 | Val rms_score: 0.4590
+ 2025-09-27 15:46:43,628 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0264 | Val rms_score: 0.4608
+ 2025-09-27 15:46:45,615 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0359 | Val rms_score: 0.4607
+ 2025-09-27 15:46:48,043 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0309 | Val rms_score: 0.4605
+ 2025-09-27 15:46:50,031 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0324 | Val rms_score: 0.4564
+ 2025-09-27 15:46:52,429 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0367 | Val rms_score: 0.4517
+ 2025-09-27 15:46:54,460 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0371 | Val rms_score: 0.4583
+ 2025-09-27 15:46:56,493 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0277 | Val rms_score: 0.4713
+ 2025-09-27 15:46:58,512 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0340 | Val rms_score: 0.4789
+ 2025-09-27 15:47:00,589 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0301 | Val rms_score: 0.4827
+ 2025-09-27 15:47:03,070 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0305 | Val rms_score: 0.4738
+ 2025-09-27 15:47:05,113 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0395 | Val rms_score: 0.4610
+ 2025-09-27 15:47:07,175 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0291 | Val rms_score: 0.4521
+ 2025-09-27 15:47:09,272 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0375 | Val rms_score: 0.4459
+ 2025-09-27 15:47:11,316 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0258 | Val rms_score: 0.4475
+ 2025-09-27 15:47:13,717 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0320 | Val rms_score: 0.4566
+ 2025-09-27 15:47:15,707 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0270 | Val rms_score: 0.4668
+ 2025-09-27 15:47:17,739 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0324 | Val rms_score: 0.4734
+ 2025-09-27 15:47:19,723 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0238 | Val rms_score: 0.4648
+ 2025-09-27 15:47:21,781 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0243 | Val rms_score: 0.4617
+ 2025-09-27 15:47:24,248 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0262 | Val rms_score: 0.4646
+ 2025-09-27 15:47:26,278 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0224 | Val rms_score: 0.4688
+ 2025-09-27 15:47:28,681 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0250 | Val rms_score: 0.4660
+ 2025-09-27 15:47:30,703 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0206 | Val rms_score: 0.4666
+ 2025-09-27 15:47:31,167 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.7191
+ 2025-09-27 15:47:31,562 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_ppb_r at 2025-09-27_15-47-31
+ 2025-09-27 15:47:33,384 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.3000 | Val rms_score: 0.6557
+ 2025-09-27 15:47:33,384 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
+ 2025-09-27 15:47:34,317 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.6557
+ 2025-09-27 15:47:40,003 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 1.1625 | Val rms_score: 0.6118
+ 2025-09-27 15:47:40,183 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
+ 2025-09-27 15:47:40,921 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6118
+ 2025-09-27 15:47:43,402 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.9437 | Val rms_score: 0.5727
+ 2025-09-27 15:47:43,593 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 15
+ 2025-09-27 15:47:44,205 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5727
+ 2025-09-27 15:47:46,426 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.7719 | Val rms_score: 0.5478
+ 2025-09-27 15:47:46,620 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
+ 2025-09-27 15:47:48,463 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5478
+ 2025-09-27 15:47:50,685 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.7719 | Val rms_score: 0.5298
+ 2025-09-27 15:47:50,879 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
+ 2025-09-27 15:47:51,492 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.5298
+ 2025-09-27 15:47:53,652 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.7250 | Val rms_score: 0.5118
+ 2025-09-27 15:47:54,203 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
+ 2025-09-27 15:47:54,817 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.5118
+ 2025-09-27 15:47:56,975 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.6062 | Val rms_score: 0.5010
+ 2025-09-27 15:47:57,168 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 35
+ 2025-09-27 15:47:57,784 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.5010
+ 2025-09-27 15:47:59,935 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.6438 | Val rms_score: 0.4803
+ 2025-09-27 15:48:00,132 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 40
+ 2025-09-27 15:48:00,740 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.4803
+ 2025-09-27 15:48:02,907 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.4969 | Val rms_score: 0.4657
+ 2025-09-27 15:48:03,102 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-27 15:48:03,720 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.4657
+ 2025-09-27 15:48:05,919 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.4313 | Val rms_score: 0.4462
+ 2025-09-27 15:48:06,127 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 50
+ 2025-09-27 15:48:06,731 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.4462
+ 2025-09-27 15:48:08,863 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.4156 | Val rms_score: 0.4408
+ 2025-09-27 15:48:09,397 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 55
+ 2025-09-27 15:48:10,011 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.4408
+ 2025-09-27 15:48:12,189 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4875 | Val rms_score: 0.4483
+ 2025-09-27 15:48:14,517 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.3391 | Val rms_score: 0.4689
+ 2025-09-27 15:48:16,560 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.3063 | Val rms_score: 0.4486
+ 2025-09-27 15:48:18,561 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.2719 | Val rms_score: 0.4362
+ 2025-09-27 15:48:18,753 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 75
+ 2025-09-27 15:48:19,360 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.4362
+ 2025-09-27 15:48:21,535 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.2922 | Val rms_score: 0.4312
+ 2025-09-27 15:48:22,064 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 80
+ 2025-09-27 15:48:22,668 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.4312
+ 2025-09-27 15:48:24,916 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.2500 | Val rms_score: 0.4323
+ 2025-09-27 15:48:26,910 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.2234 | Val rms_score: 0.4223
+ 2025-09-27 15:48:27,104 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 90
+ 2025-09-27 15:48:27,727 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 18 with val rms_score: 0.4223
+ 2025-09-27 15:48:29,856 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.2125 | Val rms_score: 0.4053
+ 2025-09-27 15:48:30,053 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 95
+ 2025-09-27 15:48:30,652 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.4053
+ 2025-09-27 15:48:32,796 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.2062 | Val rms_score: 0.3949
+ 2025-09-27 15:48:32,995 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 100
+ 2025-09-27 15:48:33,609 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.3949
+ 2025-09-27 15:48:35,715 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1945 | Val rms_score: 0.3882
+ 2025-09-27 15:48:36,281 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 105
+ 2025-09-27 15:48:36,887 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 21 with val rms_score: 0.3882
+ 2025-09-27 15:48:39,017 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1938 | Val rms_score: 0.3903
+ 2025-09-27 15:48:41,014 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.2625 | Val rms_score: 0.3970
+ 2025-09-27 15:48:43,058 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1781 | Val rms_score: 0.4072
+ 2025-09-27 15:48:45,075 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1828 | Val rms_score: 0.4234
+ 2025-09-27 15:48:47,088 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.1625 | Val rms_score: 0.3966
+ 2025-09-27 15:48:49,476 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.1570 | Val rms_score: 0.3757
+ 2025-09-27 15:48:49,766 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 135
+ 2025-09-27 15:48:50,573 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 27 with val rms_score: 0.3757
+ 2025-09-27 15:48:52,733 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.1406 | Val rms_score: 0.3686
+ 2025-09-27 15:48:52,927 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 140
+ 2025-09-27 15:48:53,535 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 28 with val rms_score: 0.3686
+ 2025-09-27 15:48:55,649 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1539 | Val rms_score: 0.3695
+ 2025-09-27 15:48:57,670 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.1328 | Val rms_score: 0.3904
+ 2025-09-27 15:48:59,753 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.1344 | Val rms_score: 0.4072
+ 2025-09-27 15:49:02,198 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.1711 | Val rms_score: 0.3950
+ 2025-09-27 15:49:04,593 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.1313 | Val rms_score: 0.3712
+ 2025-09-27 15:49:06,720 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1828 | Val rms_score: 0.3632
+ 2025-09-27 15:49:06,913 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 170
+ 2025-09-27 15:49:07,542 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 34 with val rms_score: 0.3632
+ 2025-09-27 15:49:09,722 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.1609 | Val rms_score: 0.3592
+ 2025-09-27 15:49:09,921 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 175
+ 2025-09-27 15:49:10,545 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 35 with val rms_score: 0.3592
+ 2025-09-27 15:49:12,742 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.1375 | Val rms_score: 0.3709
+ 2025-09-27 15:49:15,112 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.1516 | Val rms_score: 0.3918
+ 2025-09-27 15:49:17,136 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.1250 | Val rms_score: 0.4052
+ 2025-09-27 15:49:19,220 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0973 | Val rms_score: 0.4057
+ 2025-09-27 15:49:21,373 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.1016 | Val rms_score: 0.4063
+ 2025-09-27 15:49:23,396 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0973 | Val rms_score: 0.4100
+ 2025-09-27 15:49:26,013 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0879 | Val rms_score: 0.4006
+ 2025-09-27 15:49:28,084 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0875 | Val rms_score: 0.3991
+ 2025-09-27 15:49:30,207 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0766 | Val rms_score: 0.4010
+ 2025-09-27 15:49:32,398 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.1250 | Val rms_score: 0.3947
+ 2025-09-27 15:49:34,867 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0727 | Val rms_score: 0.3792
+ 2025-09-27 15:49:37,285 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0734 | Val rms_score: 0.3962
+ 2025-09-27 15:49:39,335 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0750 | Val rms_score: 0.4244
+ 2025-09-27 15:49:41,429 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0633 | Val rms_score: 0.4427
+ 2025-09-27 15:49:43,506 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0797 | Val rms_score: 0.4384
+ 2025-09-27 15:49:45,687 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0641 | Val rms_score: 0.4302
+ 2025-09-27 15:49:48,069 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0613 | Val rms_score: 0.4203
+ 2025-09-27 15:49:50,139 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0512 | Val rms_score: 0.4184
+ 2025-09-27 15:49:52,190 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0574 | Val rms_score: 0.4209
+ 2025-09-27 15:49:54,401 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0494 | Val rms_score: 0.4270
+ 2025-09-27 15:49:56,404 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0434 | Val rms_score: 0.4289
+ 2025-09-27 15:49:58,755 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0555 | Val rms_score: 0.4310
+ 2025-09-27 15:50:00,779 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0492 | Val rms_score: 0.4375
+ 2025-09-27 15:50:02,776 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0492 | Val rms_score: 0.4323
+ 2025-09-27 15:50:04,917 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0531 | Val rms_score: 0.4157
+ 2025-09-27 15:50:06,940 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0531 | Val rms_score: 0.4095
+ 2025-09-27 15:50:09,387 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0441 | Val rms_score: 0.4117
+ 2025-09-27 15:50:11,753 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0375 | Val rms_score: 0.4192
+ 2025-09-27 15:50:13,752 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0492 | Val rms_score: 0.4196
+ 2025-09-27 15:50:15,741 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0418 | Val rms_score: 0.4210
+ 2025-09-27 15:50:17,777 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0480 | Val rms_score: 0.4243
+ 2025-09-27 15:50:20,178 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0422 | Val rms_score: 0.4308
+ 2025-09-27 15:50:22,391 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0389 | Val rms_score: 0.4433
+ 2025-09-27 15:50:24,414 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0350 | Val rms_score: 0.4398
+ 2025-09-27 15:50:26,477 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0441 | Val rms_score: 0.4369
+ 2025-09-27 15:50:28,549 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0357 | Val rms_score: 0.4431
+ 2025-09-27 15:50:31,207 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0352 | Val rms_score: 0.4377
+ 2025-09-27 15:50:33,266 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0395 | Val rms_score: 0.4389
+ 2025-09-27 15:50:35,343 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0303 | Val rms_score: 0.4437
+ 2025-09-27 15:50:37,564 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0406 | Val rms_score: 0.4466
+ 2025-09-27 15:50:39,706 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0383 | Val rms_score: 0.4560
+ 2025-09-27 15:50:42,248 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0314 | Val rms_score: 0.4580
+ 2025-09-27 15:50:44,278 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0297 | Val rms_score: 0.4521
+ 2025-09-27 15:50:47,712 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0305 | Val rms_score: 0.4465
+ 2025-09-27 15:50:49,940 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0297 | Val rms_score: 0.4499
+ 2025-09-27 15:50:51,980 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0332 | Val rms_score: 0.4675
+ 2025-09-27 15:50:54,362 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0303 | Val rms_score: 0.4770
+ 2025-09-27 15:50:56,419 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0385 | Val rms_score: 0.4738
+ 2025-09-27 15:50:58,478 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0357 | Val rms_score: 0.4543
+ 2025-09-27 15:51:00,578 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0254 | Val rms_score: 0.4546
+ 2025-09-27 15:51:02,613 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0312 | Val rms_score: 0.4599
+ 2025-09-27 15:51:05,054 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0279 | Val rms_score: 0.4679
+ 2025-09-27 15:51:07,079 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0283 | Val rms_score: 0.4650
+ 2025-09-27 15:51:09,112 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0241 | Val rms_score: 0.4612
+ 2025-09-27 15:51:11,257 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0336 | Val rms_score: 0.4645
+ 2025-09-27 15:51:13,294 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0230 | Val rms_score: 0.4691
+ 2025-09-27 15:51:15,749 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0245 | Val rms_score: 0.4766
+ 2025-09-27 15:51:17,778 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0195 | Val rms_score: 0.4807
+ 2025-09-27 15:51:19,817 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0224 | Val rms_score: 0.4767
+ 2025-09-27 15:51:21,950 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0227 | Val rms_score: 0.4739
+ 2025-09-27 15:51:23,970 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0275 | Val rms_score: 0.4718
+ 2025-09-27 15:51:26,378 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0245 | Val rms_score: 0.4708
+ 2025-09-27 15:51:28,398 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0226 | Val rms_score: 0.4682
+ 2025-09-27 15:51:30,460 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0217 | Val rms_score: 0.4703
+ 2025-09-27 15:51:32,480 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0238 | Val rms_score: 0.4725
+ 2025-09-27 15:51:32,954 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.7220
+ 2025-09-27 15:51:33,333 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.7268, Std Dev: 0.0090
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_adme_solubility_epochs100_batch_size32_20250927_162635.log ADDED
@@ -0,0 +1,329 @@
+ 2025-09-27 16:26:35,763 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_solubility
+ 2025-09-27 16:26:35,764 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - dataset: adme_solubility, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
+ 2025-09-27 16:26:35,768 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_solubility at 2025-09-27_16-26-35
+ 2025-09-27 16:26:39,589 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7727 | Val rms_score: 0.3946
+ 2025-09-27 16:26:39,590 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
+ 2025-09-27 16:26:40,176 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.3946
+ 2025-09-27 16:26:44,843 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4688 | Val rms_score: 0.4422
+ 2025-09-27 16:26:49,673 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4318 | Val rms_score: 0.4071
+ 2025-09-27 16:26:54,628 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3234 | Val rms_score: 0.4604
+ 2025-09-27 16:26:59,458 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2705 | Val rms_score: 0.3709
+ 2025-09-27 16:26:59,611 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 275
+ 2025-09-27 16:27:00,184 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.3709
+ 2025-09-27 16:27:04,996 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1708 | Val rms_score: 0.3845
+ 2025-09-27 16:27:09,915 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1250 | Val rms_score: 0.4022
+ 2025-09-27 16:27:14,504 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1031 | Val rms_score: 0.3912
+ 2025-09-27 16:27:19,031 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0875 | Val rms_score: 0.4061
+ 2025-09-27 16:27:23,498 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0700 | Val rms_score: 0.3957
+ 2025-09-27 16:27:28,107 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0520 | Val rms_score: 0.3871
+ 2025-09-27 16:27:32,984 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0557 | Val rms_score: 0.3893
+ 2025-09-27 16:27:38,370 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0479 | Val rms_score: 0.3948
+ 2025-09-27 16:27:43,172 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0415 | Val rms_score: 0.3937
+ 2025-09-27 16:27:48,147 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0441 | Val rms_score: 0.3792
+ 2025-09-27 16:27:52,865 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0398 | Val rms_score: 0.3994
+ 2025-09-27 16:27:57,849 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0404 | Val rms_score: 0.3986
+ 2025-09-27 16:28:02,408 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0362 | Val rms_score: 0.4043
+ 2025-09-27 16:28:07,828 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0323 | Val rms_score: 0.3964
+ 2025-09-27 16:28:12,509 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0277 | Val rms_score: 0.3893
+ 2025-09-27 16:28:17,175 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0277 | Val rms_score: 0.3965
+ 2025-09-27 16:28:22,560 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0301 | Val rms_score: 0.3807
+ 2025-09-27 16:28:27,635 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0283 | Val rms_score: 0.3983
+ 2025-09-27 16:28:32,622 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0270 | Val rms_score: 0.3987
+ 2025-09-27 16:28:37,161 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0273 | Val rms_score: 0.3900
+ 2025-09-27 16:28:41,758 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0297 | Val rms_score: 0.3896
34
+ 2025-09-27 16:28:46,687 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0278 | Val rms_score: 0.3882
35
+ 2025-09-27 16:28:51,179 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0287 | Val rms_score: 0.4082
36
+ 2025-09-27 16:28:55,744 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0263 | Val rms_score: 0.3865
37
+ 2025-09-27 16:29:00,340 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0255 | Val rms_score: 0.3921
38
+ 2025-09-27 16:29:05,026 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0173 | Val rms_score: 0.3910
39
+ 2025-09-27 16:29:10,410 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0216 | Val rms_score: 0.3948
40
+ 2025-09-27 16:29:15,014 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0285 | Val rms_score: 0.3808
41
+ 2025-09-27 16:29:20,094 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0207 | Val rms_score: 0.3847
42
+ 2025-09-27 16:29:24,663 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0230 | Val rms_score: 0.3925
43
+ 2025-09-27 16:29:29,266 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0202 | Val rms_score: 0.3845
44
+ 2025-09-27 16:29:35,350 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0194 | Val rms_score: 0.3962
45
+ 2025-09-27 16:29:39,862 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0205 | Val rms_score: 0.4036
46
+ 2025-09-27 16:29:44,352 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0194 | Val rms_score: 0.3893
47
+ 2025-09-27 16:29:48,945 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0186 | Val rms_score: 0.3991
48
+ 2025-09-27 16:29:53,711 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0182 | Val rms_score: 0.3919
49
+ 2025-09-27 16:29:58,907 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0205 | Val rms_score: 0.3910
50
+ 2025-09-27 16:30:03,854 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0163 | Val rms_score: 0.3922
51
+ 2025-09-27 16:30:08,531 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0186 | Val rms_score: 0.3915
52
+ 2025-09-27 16:30:13,081 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0175 | Val rms_score: 0.3972
53
+ 2025-09-27 16:30:17,624 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0182 | Val rms_score: 0.3967
54
+ 2025-09-27 16:30:22,797 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0166 | Val rms_score: 0.3880
55
+ 2025-09-27 16:30:27,366 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0151 | Val rms_score: 0.3987
56
+ 2025-09-27 16:30:32,023 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0163 | Val rms_score: 0.3892
57
+ 2025-09-27 16:30:36,542 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0158 | Val rms_score: 0.4009
58
+ 2025-09-27 16:30:41,521 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0179 | Val rms_score: 0.3916
59
+ 2025-09-27 16:30:46,719 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0143 | Val rms_score: 0.3948
60
+ 2025-09-27 16:30:51,607 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0141 | Val rms_score: 0.3854
61
+ 2025-09-27 16:30:56,104 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0158 | Val rms_score: 0.3907
62
+ 2025-09-27 16:31:01,553 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0149 | Val rms_score: 0.3875
63
+ 2025-09-27 16:31:06,093 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0149 | Val rms_score: 0.3876
64
+ 2025-09-27 16:31:11,050 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0152 | Val rms_score: 0.4067
65
+ 2025-09-27 16:31:15,663 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0148 | Val rms_score: 0.3884
66
+ 2025-09-27 16:31:20,228 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0148 | Val rms_score: 0.3896
67
+ 2025-09-27 16:31:25,082 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0136 | Val rms_score: 0.4014
68
+ 2025-09-27 16:31:29,935 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0137 | Val rms_score: 0.3935
69
+ 2025-09-27 16:31:35,660 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0163 | Val rms_score: 0.3879
70
+ 2025-09-27 16:31:40,226 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0137 | Val rms_score: 0.3945
+ 2025-09-27 16:31:44,721 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0131 | Val rms_score: 0.4010
+ 2025-09-27 16:31:49,243 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0141 | Val rms_score: 0.3904
+ 2025-09-27 16:31:53,751 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0133 | Val rms_score: 0.3976
+ 2025-09-27 16:31:58,730 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0132 | Val rms_score: 0.3902
+ 2025-09-27 16:32:03,191 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0137 | Val rms_score: 0.3945
+ 2025-09-27 16:32:07,764 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0127 | Val rms_score: 0.3990
+ 2025-09-27 16:32:12,761 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0122 | Val rms_score: 0.3911
+ 2025-09-27 16:32:17,995 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0130 | Val rms_score: 0.3963
+ 2025-09-27 16:32:23,198 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0135 | Val rms_score: 0.3911
+ 2025-09-27 16:32:28,638 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0143 | Val rms_score: 0.3951
+ 2025-09-27 16:32:33,176 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0127 | Val rms_score: 0.3935
+ 2025-09-27 16:32:37,683 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0116 | Val rms_score: 0.3938
+ 2025-09-27 16:32:42,276 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0124 | Val rms_score: 0.3950
+ 2025-09-27 16:32:47,362 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0121 | Val rms_score: 0.3955
+ 2025-09-27 16:32:52,067 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0127 | Val rms_score: 0.3988
+ 2025-09-27 16:32:56,951 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0120 | Val rms_score: 0.4006
+ 2025-09-27 16:33:01,864 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0116 | Val rms_score: 0.3961
+ 2025-09-27 16:33:06,613 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0112 | Val rms_score: 0.3958
+ 2025-09-27 16:33:11,979 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0103 | Val rms_score: 0.3954
+ 2025-09-27 16:33:16,586 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0114 | Val rms_score: 0.3970
+ 2025-09-27 16:33:21,169 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0122 | Val rms_score: 0.3882
+ 2025-09-27 16:33:25,673 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0122 | Val rms_score: 0.3976
+ 2025-09-27 16:33:30,138 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0110 | Val rms_score: 0.3879
+ 2025-09-27 16:33:35,161 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0119 | Val rms_score: 0.3939
+ 2025-09-27 16:33:39,601 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0122 | Val rms_score: 0.3871
+ 2025-09-27 16:33:44,393 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0109 | Val rms_score: 0.3982
+ 2025-09-27 16:33:49,354 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0106 | Val rms_score: 0.3927
+ 2025-09-27 16:33:55,094 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0117 | Val rms_score: 0.3950
+ 2025-09-27 16:34:00,682 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0112 | Val rms_score: 0.3893
+ 2025-09-27 16:34:05,313 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0124 | Val rms_score: 0.3891
+ 2025-09-27 16:34:09,893 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0115 | Val rms_score: 0.3942
+ 2025-09-27 16:34:14,485 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0117 | Val rms_score: 0.3854
+ 2025-09-27 16:34:19,076 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0109 | Val rms_score: 0.3938
+ 2025-09-27 16:34:24,203 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0108 | Val rms_score: 0.3906
+ 2025-09-27 16:34:28,919 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0112 | Val rms_score: 0.3908
+ 2025-09-27 16:34:34,028 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0114 | Val rms_score: 0.3899
+ 2025-09-27 16:34:38,940 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0105 | Val rms_score: 0.3903
+ 2025-09-27 16:34:39,331 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.4741
+ 2025-09-27 16:34:39,785 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_solubility at 2025-09-27_16-34-39
+ 2025-09-27 16:34:43,857 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7409 | Val rms_score: 0.4453
+ 2025-09-27 16:34:43,858 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
+ 2025-09-27 16:34:44,467 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4453
+ 2025-09-27 16:34:49,489 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5250 | Val rms_score: 0.3950
+ 2025-09-27 16:34:49,676 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 110
+ 2025-09-27 16:34:50,303 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.3950
+ 2025-09-27 16:34:55,017 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4136 | Val rms_score: 0.3639
+ 2025-09-27 16:34:55,206 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 165
+ 2025-09-27 16:34:55,792 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.3639
+ 2025-09-27 16:35:00,663 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3719 | Val rms_score: 0.4396
+ 2025-09-27 16:35:05,098 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2614 | Val rms_score: 0.3572
+ 2025-09-27 16:35:05,289 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 275
+ 2025-09-27 16:35:05,869 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.3572
+ 2025-09-27 16:35:10,774 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2000 | Val rms_score: 0.4394
+ 2025-09-27 16:35:15,983 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1273 | Val rms_score: 0.3950
+ 2025-09-27 16:35:21,319 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0969 | Val rms_score: 0.4124
+ 2025-09-27 16:35:26,565 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0773 | Val rms_score: 0.3908
+ 2025-09-27 16:35:31,205 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0653 | Val rms_score: 0.3919
+ 2025-09-27 16:35:35,723 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0684 | Val rms_score: 0.3909
+ 2025-09-27 16:35:40,641 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0531 | Val rms_score: 0.3972
+ 2025-09-27 16:35:45,318 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0484 | Val rms_score: 0.3891
+ 2025-09-27 16:35:50,313 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0426 | Val rms_score: 0.3912
+ 2025-09-27 16:35:54,995 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0422 | Val rms_score: 0.3851
+ 2025-09-27 16:35:59,468 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0398 | Val rms_score: 0.3916
+ 2025-09-27 16:36:04,952 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0393 | Val rms_score: 0.3871
+ 2025-09-27 16:36:09,821 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0338 | Val rms_score: 0.3756
+ 2025-09-27 16:36:15,486 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0330 | Val rms_score: 0.3971
+ 2025-09-27 16:36:20,258 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0325 | Val rms_score: 0.3792
+ 2025-09-27 16:36:25,132 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0294 | Val rms_score: 0.3863
+ 2025-09-27 16:36:30,048 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0299 | Val rms_score: 0.3870
+ 2025-09-27 16:36:34,581 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0284 | Val rms_score: 0.3887
+ 2025-09-27 16:36:39,081 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0245 | Val rms_score: 0.3889
+ 2025-09-27 16:36:43,656 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0256 | Val rms_score: 0.3892
+ 2025-09-27 16:36:48,528 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0249 | Val rms_score: 0.4017
+ 2025-09-27 16:36:53,934 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0273 | Val rms_score: 0.4032
+ 2025-09-27 16:36:58,925 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0273 | Val rms_score: 0.3825
+ 2025-09-27 16:37:03,591 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0244 | Val rms_score: 0.3915
+ 2025-09-27 16:37:08,503 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0230 | Val rms_score: 0.3845
+ 2025-09-27 16:37:13,077 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0189 | Val rms_score: 0.3897
+ 2025-09-27 16:37:18,109 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0226 | Val rms_score: 0.3960
+ 2025-09-27 16:37:22,470 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0210 | Val rms_score: 0.3876
+ 2025-09-27 16:37:27,003 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0206 | Val rms_score: 0.3824
+ 2025-09-27 16:37:31,418 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0217 | Val rms_score: 0.3854
+ 2025-09-27 16:37:36,324 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0237 | Val rms_score: 0.3845
+ 2025-09-27 16:37:42,472 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0202 | Val rms_score: 0.3859
+ 2025-09-27 16:37:47,254 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0202 | Val rms_score: 0.3930
+ 2025-09-27 16:37:51,824 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0192 | Val rms_score: 0.3871
+ 2025-09-27 16:37:56,336 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0182 | Val rms_score: 0.3858
+ 2025-09-27 16:38:00,826 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0173 | Val rms_score: 0.3855
+ 2025-09-27 16:38:05,848 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0195 | Val rms_score: 0.3804
+ 2025-09-27 16:38:10,366 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0172 | Val rms_score: 0.3794
+ 2025-09-27 16:38:15,035 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0165 | Val rms_score: 0.3903
+ 2025-09-27 16:38:19,638 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0170 | Val rms_score: 0.3846
+ 2025-09-27 16:38:24,599 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0158 | Val rms_score: 0.3944
+ 2025-09-27 16:38:29,920 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0170 | Val rms_score: 0.3844
+ 2025-09-27 16:38:34,790 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0174 | Val rms_score: 0.3899
+ 2025-09-27 16:38:39,285 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0156 | Val rms_score: 0.3834
+ 2025-09-27 16:38:43,850 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0148 | Val rms_score: 0.3850
+ 2025-09-27 16:38:48,330 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0135 | Val rms_score: 0.3859
+ 2025-09-27 16:38:53,225 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0164 | Val rms_score: 0.3910
+ 2025-09-27 16:38:57,959 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0161 | Val rms_score: 0.3885
+ 2025-09-27 16:39:02,424 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0165 | Val rms_score: 0.3853
+ 2025-09-27 16:39:08,066 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0198 | Val rms_score: 0.3891
+ 2025-09-27 16:39:12,651 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0173 | Val rms_score: 0.3828
+ 2025-09-27 16:39:18,064 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0160 | Val rms_score: 0.3815
+ 2025-09-27 16:39:22,768 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0157 | Val rms_score: 0.3780
+ 2025-09-27 16:39:27,834 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0143 | Val rms_score: 0.3844
+ 2025-09-27 16:39:32,569 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0135 | Val rms_score: 0.3881
+ 2025-09-27 16:39:37,138 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0138 | Val rms_score: 0.3841
+ 2025-09-27 16:39:42,070 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0148 | Val rms_score: 0.3915
+ 2025-09-27 16:39:46,630 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0138 | Val rms_score: 0.3832
+ 2025-09-27 16:39:51,104 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0135 | Val rms_score: 0.3833
+ 2025-09-27 16:39:56,197 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0133 | Val rms_score: 0.3848
+ 2025-09-27 16:40:01,196 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0136 | Val rms_score: 0.3805
+ 2025-09-27 16:40:06,562 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0125 | Val rms_score: 0.3876
+ 2025-09-27 16:40:11,161 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0131 | Val rms_score: 0.3857
+ 2025-09-27 16:40:15,711 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0134 | Val rms_score: 0.3899
+ 2025-09-27 16:40:20,305 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0123 | Val rms_score: 0.3844
+ 2025-09-27 16:40:24,741 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0127 | Val rms_score: 0.3808
+ 2025-09-27 16:40:29,636 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0125 | Val rms_score: 0.3812
+ 2025-09-27 16:40:35,300 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0124 | Val rms_score: 0.3812
+ 2025-09-27 16:40:40,103 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0129 | Val rms_score: 0.3816
+ 2025-09-27 16:40:44,955 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0125 | Val rms_score: 0.3882
+ 2025-09-27 16:40:49,870 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0126 | Val rms_score: 0.3805
+ 2025-09-27 16:40:55,397 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0123 | Val rms_score: 0.3854
+ 2025-09-27 16:41:00,015 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0129 | Val rms_score: 0.3838
+ 2025-09-27 16:41:04,640 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0111 | Val rms_score: 0.3890
+ 2025-09-27 16:41:09,202 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0121 | Val rms_score: 0.3827
+ 2025-09-27 16:41:13,709 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0120 | Val rms_score: 0.3885
+ 2025-09-27 16:41:18,650 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0117 | Val rms_score: 0.3836
+ 2025-09-27 16:41:23,359 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0120 | Val rms_score: 0.3872
+ 2025-09-27 16:41:27,885 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0115 | Val rms_score: 0.3833
+ 2025-09-27 16:41:32,749 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0125 | Val rms_score: 0.3846
+ 2025-09-27 16:41:37,693 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0129 | Val rms_score: 0.3838
+ 2025-09-27 16:41:43,136 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0120 | Val rms_score: 0.3811
+ 2025-09-27 16:41:47,758 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0103 | Val rms_score: 0.3873
+ 2025-09-27 16:41:52,334 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0117 | Val rms_score: 0.3905
+ 2025-09-27 16:41:56,781 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0105 | Val rms_score: 0.3854
+ 2025-09-27 16:42:02,252 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0087 | Val rms_score: 0.3883
+ 2025-09-27 16:42:07,340 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0102 | Val rms_score: 0.3845
+ 2025-09-27 16:42:11,863 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0100 | Val rms_score: 0.3830
+ 2025-09-27 16:42:16,872 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0107 | Val rms_score: 0.3851
+ 2025-09-27 16:42:21,588 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0099 | Val rms_score: 0.3892
+ 2025-09-27 16:42:26,348 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0122 | Val rms_score: 0.3794
+ 2025-09-27 16:42:31,431 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0117 | Val rms_score: 0.3821
+ 2025-09-27 16:42:35,990 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0107 | Val rms_score: 0.3853
+ 2025-09-27 16:42:40,585 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0102 | Val rms_score: 0.3806
+ 2025-09-27 16:42:45,175 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0108 | Val rms_score: 0.3809
+ 2025-09-27 16:42:45,641 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.4594
+ 2025-09-27 16:42:46,078 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_solubility at 2025-09-27_16-42-46
+ 2025-09-27 16:42:50,024 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7545 | Val rms_score: 0.4043
+ 2025-09-27 16:42:50,024 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
+ 2025-09-27 16:42:50,964 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4043
+ 2025-09-27 16:42:57,787 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5594 | Val rms_score: 0.3967
+ 2025-09-27 16:42:57,977 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 110
+ 2025-09-27 16:42:58,820 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.3967
+ 2025-09-27 16:43:04,484 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4432 | Val rms_score: 0.4769
+ 2025-09-27 16:43:10,437 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3156 | Val rms_score: 0.4050
+ 2025-09-27 16:43:15,364 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2386 | Val rms_score: 0.3833
+ 2025-09-27 16:43:15,550 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 275
+ 2025-09-27 16:43:16,146 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.3833
+ 2025-09-27 16:43:21,027 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1604 | Val rms_score: 0.4178
+ 2025-09-27 16:43:25,937 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1199 | Val rms_score: 0.3916
+ 2025-09-27 16:43:30,521 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1094 | Val rms_score: 0.3907
+ 2025-09-27 16:43:35,321 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0869 | Val rms_score: 0.3778
+ 2025-09-27 16:43:35,516 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 495
+ 2025-09-27 16:43:36,123 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3778
+ 2025-09-27 16:43:40,730 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0700 | Val rms_score: 0.4206
+ 2025-09-27 16:43:45,138 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0754 | Val rms_score: 0.4214
+ 2025-09-27 16:43:50,155 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0545 | Val rms_score: 0.4038
+ 2025-09-27 16:43:54,926 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0529 | Val rms_score: 0.3901
+ 2025-09-27 16:43:59,825 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0418 | Val rms_score: 0.3837
+ 2025-09-27 16:44:04,827 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0444 | Val rms_score: 0.3872
+ 2025-09-27 16:44:09,852 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0389 | Val rms_score: 0.3898
+ 2025-09-27 16:44:14,958 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0368 | Val rms_score: 0.3879
+ 2025-09-27 16:44:19,572 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0324 | Val rms_score: 0.3800
+ 2025-09-27 16:44:25,253 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0285 | Val rms_score: 0.4085
+ 2025-09-27 16:44:29,713 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0283 | Val rms_score: 0.3939
+ 2025-09-27 16:44:34,201 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0293 | Val rms_score: 0.4093
+ 2025-09-27 16:44:39,128 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0293 | Val rms_score: 0.3845
+ 2025-09-27 16:44:43,996 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0303 | Val rms_score: 0.3908
+ 2025-09-27 16:44:49,102 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0268 | Val rms_score: 0.3877
+ 2025-09-27 16:44:54,058 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0259 | Val rms_score: 0.3990
+ 2025-09-27 16:44:58,528 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0242 | Val rms_score: 0.3841
+ 2025-09-27 16:45:03,551 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0234 | Val rms_score: 0.3815
+ 2025-09-27 16:45:08,138 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0266 | Val rms_score: 0.3834
256
+ 2025-09-27 16:45:12,628 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0249 | Val rms_score: 0.3927
257
+ 2025-09-27 16:45:17,248 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0222 | Val rms_score: 0.3895
258
+ 2025-09-27 16:45:21,713 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0211 | Val rms_score: 0.3911
259
+ 2025-09-27 16:45:26,944 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0206 | Val rms_score: 0.3860
260
+ 2025-09-27 16:45:31,751 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0221 | Val rms_score: 0.3862
261
+ 2025-09-27 16:45:36,658 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0197 | Val rms_score: 0.3960
262
+ 2025-09-27 16:45:41,318 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0191 | Val rms_score: 0.3818
263
+ 2025-09-27 16:45:45,822 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0200 | Val rms_score: 0.3887
264
+ 2025-09-27 16:45:51,769 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0209 | Val rms_score: 0.4003
265
+ 2025-09-27 16:45:56,332 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0213 | Val rms_score: 0.3923
266
+ 2025-09-27 16:46:00,807 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0183 | Val rms_score: 0.3890
267
+ 2025-09-27 16:46:05,309 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0178 | Val rms_score: 0.3905
268
+ 2025-09-27 16:46:09,870 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0182 | Val rms_score: 0.3970
269
+ 2025-09-27 16:46:14,994 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0187 | Val rms_score: 0.3854
270
+ 2025-09-27 16:46:19,757 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0161 | Val rms_score: 0.3927
271
+ 2025-09-27 16:46:24,723 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0153 | Val rms_score: 0.3883
272
+ 2025-09-27 16:46:29,389 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0159 | Val rms_score: 0.3979
273
+ 2025-09-27 16:46:34,132 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0169 | Val rms_score: 0.3975
274
+ 2025-09-27 16:46:39,066 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0161 | Val rms_score: 0.3947
275
+ 2025-09-27 16:46:43,704 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0170 | Val rms_score: 0.3972
276
+ 2025-09-27 16:46:48,188 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0160 | Val rms_score: 0.3945
277
+ 2025-09-27 16:46:52,671 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0211 | Val rms_score: 0.3857
278
+ 2025-09-27 16:46:57,225 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0157 | Val rms_score: 0.3938
279
+ 2025-09-27 16:47:02,440 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0157 | Val rms_score: 0.3844
280
+ 2025-09-27 16:47:07,349 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0130 | Val rms_score: 0.3899
281
+ 2025-09-27 16:47:12,421 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0160 | Val rms_score: 0.3915
282
+ 2025-09-27 16:47:18,006 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0163 | Val rms_score: 0.3915
283
+ 2025-09-27 16:47:22,473 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0146 | Val rms_score: 0.3887
284
+ 2025-09-27 16:47:27,463 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0139 | Val rms_score: 0.3938
285
+ 2025-09-27 16:47:31,968 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0143 | Val rms_score: 0.3900
286
+ 2025-09-27 16:47:36,550 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0141 | Val rms_score: 0.3829
287
+ 2025-09-27 16:47:41,062 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0134 | Val rms_score: 0.3866
288
+ 2025-09-27 16:47:45,608 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0135 | Val rms_score: 0.3949
289
+ 2025-09-27 16:47:50,865 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0115 | Val rms_score: 0.3833
290
+ 2025-09-27 16:47:55,900 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0124 | Val rms_score: 0.3943
291
+ 2025-09-27 16:48:01,117 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0133 | Val rms_score: 0.3870
292
+ 2025-09-27 16:48:05,819 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0138 | Val rms_score: 0.3916
293
+ 2025-09-27 16:48:10,421 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0137 | Val rms_score: 0.3887
294
+ 2025-09-27 16:48:15,394 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0128 | Val rms_score: 0.3839
295
+ 2025-09-27 16:48:19,874 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0128 | Val rms_score: 0.3831
296
+ 2025-09-27 16:48:24,349 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0137 | Val rms_score: 0.3888
297
+ 2025-09-27 16:48:28,892 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0127 | Val rms_score: 0.3881
298
+ 2025-09-27 16:48:33,631 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0123 | Val rms_score: 0.3898
299
+ 2025-09-27 16:48:39,182 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0132 | Val rms_score: 0.3903
300
+ 2025-09-27 16:48:44,643 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0138 | Val rms_score: 0.3921
301
+ 2025-09-27 16:48:49,906 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0120 | Val rms_score: 0.3924
302
+ 2025-09-27 16:48:54,739 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0124 | Val rms_score: 0.3925
303
+ 2025-09-27 16:48:59,181 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0125 | Val rms_score: 0.3884
304
+ 2025-09-27 16:49:04,217 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0122 | Val rms_score: 0.3861
305
+ 2025-09-27 16:49:08,726 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0118 | Val rms_score: 0.3859
306
+ 2025-09-27 16:49:13,460 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0113 | Val rms_score: 0.3885
307
+ 2025-09-27 16:49:17,998 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0121 | Val rms_score: 0.3887
308
+ 2025-09-27 16:49:22,542 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0120 | Val rms_score: 0.3904
309
+ 2025-09-27 16:49:27,933 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0135 | Val rms_score: 0.3898
310
+ 2025-09-27 16:49:32,814 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0112 | Val rms_score: 0.3868
311
+ 2025-09-27 16:49:37,802 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0109 | Val rms_score: 0.3859
312
+ 2025-09-27 16:49:42,300 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0110 | Val rms_score: 0.3877
313
+ 2025-09-27 16:49:46,889 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0117 | Val rms_score: 0.3823
314
+ 2025-09-27 16:49:51,953 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0119 | Val rms_score: 0.3826
315
+ 2025-09-27 16:49:56,481 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0112 | Val rms_score: 0.3856
316
+ 2025-09-27 16:50:01,088 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0108 | Val rms_score: 0.3866
317
+ 2025-09-27 16:50:05,650 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0112 | Val rms_score: 0.3874
318
+ 2025-09-27 16:50:11,254 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0098 | Val rms_score: 0.3874
319
+ 2025-09-27 16:50:16,594 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0121 | Val rms_score: 0.3849
320
+ 2025-09-27 16:50:21,592 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0103 | Val rms_score: 0.3807
321
+ 2025-09-27 16:50:26,376 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0113 | Val rms_score: 0.3894
322
+ 2025-09-27 16:50:30,896 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0115 | Val rms_score: 0.3883
323
+ 2025-09-27 16:50:35,457 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0114 | Val rms_score: 0.3836
324
+ 2025-09-27 16:50:40,583 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0113 | Val rms_score: 0.3914
325
+ 2025-09-27 16:50:45,045 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0106 | Val rms_score: 0.3882
326
+ 2025-09-27 16:50:49,678 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0109 | Val rms_score: 0.3856
327
+ 2025-09-27 16:50:54,128 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0110 | Val rms_score: 0.3867
328
+ 2025-09-27 16:50:54,546 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.4545
329
+ 2025-09-27 16:50:54,993 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4627, Std Dev: 0.0083
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_astrazeneca_cl_epochs100_batch_size32_20250926_091804.log ADDED
@@ -0,0 +1,323 @@
1
+ 2025-09-26 09:18:04,252 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_cl
2
+ 2025-09-26 09:18:04,252 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - dataset: astrazeneca_cl, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
3
+ 2025-09-26 09:18:04,261 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_cl at 2025-09-26_09-18-04
4
+ 2025-09-26 09:18:13,646 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6076 | Val rms_score: 0.4261
5
+ 2025-09-26 09:18:13,646 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
6
+ 2025-09-26 09:18:14,257 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4261
7
+ 2025-09-26 09:18:24,287 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4184 | Val rms_score: 0.4214
8
+ 2025-09-26 09:18:24,509 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 72
9
+ 2025-09-26 09:18:25,098 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4214
10
+ 2025-09-26 09:18:34,028 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3281 | Val rms_score: 0.4239
11
+ 2025-09-26 09:18:43,422 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2882 | Val rms_score: 0.4272
12
+ 2025-09-26 09:18:52,432 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2535 | Val rms_score: 0.4182
13
+ 2025-09-26 09:18:52,638 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 180
14
+ 2025-09-26 09:18:53,173 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4182
15
+ 2025-09-26 09:19:02,704 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2188 | Val rms_score: 0.4209
16
+ 2025-09-26 09:19:12,349 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1849 | Val rms_score: 0.4360
17
+ 2025-09-26 09:19:21,786 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1432 | Val rms_score: 0.4330
18
+ 2025-09-26 09:19:30,989 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1406 | Val rms_score: 0.4340
19
+ 2025-09-26 09:19:40,009 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1189 | Val rms_score: 0.4307
20
+ 2025-09-26 09:19:49,177 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0981 | Val rms_score: 0.4424
21
+ 2025-09-26 09:19:58,750 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0879 | Val rms_score: 0.4369
22
+ 2025-09-26 09:20:08,037 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0790 | Val rms_score: 0.4411
23
+ 2025-09-26 09:20:17,214 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0586 | Val rms_score: 0.4408
24
+ 2025-09-26 09:20:26,149 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0742 | Val rms_score: 0.4419
25
+ 2025-09-26 09:20:35,203 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0716 | Val rms_score: 0.4334
26
+ 2025-09-26 09:20:44,816 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0765 | Val rms_score: 0.4507
27
+ 2025-09-26 09:20:54,023 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0720 | Val rms_score: 0.4541
28
+ 2025-09-26 09:21:03,202 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0621 | Val rms_score: 0.4424
29
+ 2025-09-26 09:21:11,693 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0645 | Val rms_score: 0.4366
30
+ 2025-09-26 09:21:20,977 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0538 | Val rms_score: 0.4457
31
+ 2025-09-26 09:21:30,499 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0525 | Val rms_score: 0.4511
32
+ 2025-09-26 09:21:39,614 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0541 | Val rms_score: 0.4461
33
+ 2025-09-26 09:21:48,858 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0506 | Val rms_score: 0.4511
34
+ 2025-09-26 09:21:57,591 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0462 | Val rms_score: 0.4444
35
+ 2025-09-26 09:22:06,921 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0464 | Val rms_score: 0.4490
36
+ 2025-09-26 09:22:15,243 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0464 | Val rms_score: 0.4449
37
+ 2025-09-26 09:22:26,835 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0425 | Val rms_score: 0.4508
38
+ 2025-09-26 09:22:36,024 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0410 | Val rms_score: 0.4452
39
+ 2025-09-26 09:22:44,798 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0388 | Val rms_score: 0.4478
40
+ 2025-09-26 09:22:53,844 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0430 | Val rms_score: 0.4542
41
+ 2025-09-26 09:23:03,556 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0373 | Val rms_score: 0.4478
42
+ 2025-09-26 09:23:12,885 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0354 | Val rms_score: 0.4506
43
+ 2025-09-26 09:23:22,102 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0360 | Val rms_score: 0.4526
44
+ 2025-09-26 09:23:31,058 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0330 | Val rms_score: 0.4515
45
+ 2025-09-26 09:23:40,238 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0354 | Val rms_score: 0.4478
46
+ 2025-09-26 09:23:50,013 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0369 | Val rms_score: 0.4476
47
+ 2025-09-26 09:23:59,196 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0336 | Val rms_score: 0.4485
48
+ 2025-09-26 09:24:08,445 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0266 | Val rms_score: 0.4493
49
+ 2025-09-26 09:24:17,280 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0289 | Val rms_score: 0.4459
50
+ 2025-09-26 09:24:26,640 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0293 | Val rms_score: 0.4419
51
+ 2025-09-26 09:24:35,814 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0285 | Val rms_score: 0.4519
52
+ 2025-09-26 09:24:45,047 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0308 | Val rms_score: 0.4493
53
+ 2025-09-26 09:24:54,422 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0295 | Val rms_score: 0.4546
54
+ 2025-09-26 09:25:03,179 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0295 | Val rms_score: 0.4425
55
+ 2025-09-26 09:25:12,377 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0297 | Val rms_score: 0.4525
56
+ 2025-09-26 09:25:21,940 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0297 | Val rms_score: 0.4467
57
+ 2025-09-26 09:25:31,085 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0264 | Val rms_score: 0.4423
58
+ 2025-09-26 09:25:40,540 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0266 | Val rms_score: 0.4418
59
+ 2025-09-26 09:25:49,561 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0273 | Val rms_score: 0.4416
60
+ 2025-09-26 09:25:58,910 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0254 | Val rms_score: 0.4447
61
+ 2025-09-26 09:26:08,577 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0227 | Val rms_score: 0.4486
62
+ 2025-09-26 09:26:17,915 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0231 | Val rms_score: 0.4424
63
+ 2025-09-26 09:26:27,002 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0247 | Val rms_score: 0.4424
64
+ 2025-09-26 09:26:35,628 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0237 | Val rms_score: 0.4490
65
+ 2025-09-26 09:26:46,347 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0239 | Val rms_score: 0.4416
66
+ 2025-09-26 09:26:56,081 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0252 | Val rms_score: 0.4411
67
+ 2025-09-26 09:27:05,371 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0246 | Val rms_score: 0.4455
68
+ 2025-09-26 09:27:14,703 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0247 | Val rms_score: 0.4418
69
+ 2025-09-26 09:27:23,422 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0254 | Val rms_score: 0.4448
70
+ 2025-09-26 09:27:32,713 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0226 | Val rms_score: 0.4405
71
+ 2025-09-26 09:27:42,113 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0231 | Val rms_score: 0.4393
72
+ 2025-09-26 09:27:51,238 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0220 | Val rms_score: 0.4466
73
+ 2025-09-26 09:28:00,477 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0242 | Val rms_score: 0.4427
74
+ 2025-09-26 09:28:09,531 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0214 | Val rms_score: 0.4387
75
+ 2025-09-26 09:28:18,493 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0206 | Val rms_score: 0.4432
76
+ 2025-09-26 09:28:28,173 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0218 | Val rms_score: 0.4397
77
+ 2025-09-26 09:28:37,399 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0193 | Val rms_score: 0.4428
78
+ 2025-09-26 09:28:46,758 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0214 | Val rms_score: 0.4438
79
+ 2025-09-26 09:28:55,728 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0202 | Val rms_score: 0.4418
80
+ 2025-09-26 09:29:04,988 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0201 | Val rms_score: 0.4412
81
+ 2025-09-26 09:29:14,501 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0196 | Val rms_score: 0.4377
82
+ 2025-09-26 09:29:23,651 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0218 | Val rms_score: 0.4430
83
+ 2025-09-26 09:29:33,005 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0193 | Val rms_score: 0.4408
84
+ 2025-09-26 09:29:41,928 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0199 | Val rms_score: 0.4406
85
+ 2025-09-26 09:29:51,227 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0193 | Val rms_score: 0.4356
86
+ 2025-09-26 09:30:00,663 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0203 | Val rms_score: 0.4332
87
+ 2025-09-26 09:30:09,973 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0203 | Val rms_score: 0.4338
88
+ 2025-09-26 09:30:19,018 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0188 | Val rms_score: 0.4391
89
+ 2025-09-26 09:30:27,853 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0206 | Val rms_score: 0.4366
90
+ 2025-09-26 09:30:37,189 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0171 | Val rms_score: 0.4402
91
+ 2025-09-26 09:30:46,481 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0171 | Val rms_score: 0.4372
92
+ 2025-09-26 09:30:56,299 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0189 | Val rms_score: 0.4435
93
+ 2025-09-26 09:31:06,908 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0179 | Val rms_score: 0.4377
94
+ 2025-09-26 09:31:15,784 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0179 | Val rms_score: 0.4377
95
+ 2025-09-26 09:31:25,139 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0170 | Val rms_score: 0.4452
96
+ 2025-09-26 09:31:34,725 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0162 | Val rms_score: 0.4419
97
+ 2025-09-26 09:31:44,059 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0170 | Val rms_score: 0.4394
98
+ 2025-09-26 09:31:53,431 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0131 | Val rms_score: 0.4393
99
+ 2025-09-26 09:32:02,176 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0161 | Val rms_score: 0.4391
100
+ 2025-09-26 09:32:11,405 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0166 | Val rms_score: 0.4372
101
+ 2025-09-26 09:32:21,232 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0185 | Val rms_score: 0.4381
102
+ 2025-09-26 09:32:30,434 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0163 | Val rms_score: 0.4404
103
+ 2025-09-26 09:32:39,690 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0164 | Val rms_score: 0.4426
104
+ 2025-09-26 09:32:48,730 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0146 | Val rms_score: 0.4391
105
+ 2025-09-26 09:32:58,218 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0156 | Val rms_score: 0.4385
106
+ 2025-09-26 09:33:07,710 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0161 | Val rms_score: 0.4405
107
+ 2025-09-26 09:33:17,221 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0162 | Val rms_score: 0.4416
108
+ 2025-09-26 09:33:25,810 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0171 | Val rms_score: 0.4381
109
+ 2025-09-26 09:33:34,575 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0153 | Val rms_score: 0.4375
110
+ 2025-09-26 09:33:35,474 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.5042
111
+ 2025-09-26 09:33:35,782 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_cl at 2025-09-26_09-33-35
+ 2025-09-26 09:33:43,892 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5590 | Val rms_score: 0.4334
+ 2025-09-26 09:33:43,892 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
+ 2025-09-26 09:33:44,571 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4334
+ 2025-09-26 09:33:53,722 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4080 | Val rms_score: 0.4379
+ 2025-09-26 09:34:02,965 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3301 | Val rms_score: 0.4073
+ 2025-09-26 09:34:03,140 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 108
+ 2025-09-26 09:34:03,691 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4073
+ 2025-09-26 09:34:13,607 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2639 | Val rms_score: 0.4308
+ 2025-09-26 09:34:23,077 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2361 | Val rms_score: 0.4464
+ 2025-09-26 09:34:32,360 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1934 | Val rms_score: 0.4369
+ 2025-09-26 09:34:42,451 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1753 | Val rms_score: 0.4330
+ 2025-09-26 09:34:52,290 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1441 | Val rms_score: 0.4458
+ 2025-09-26 09:35:02,038 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1283 | Val rms_score: 0.4383
+ 2025-09-26 09:35:11,272 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1172 | Val rms_score: 0.4578
+ 2025-09-26 09:35:20,502 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1033 | Val rms_score: 0.4332
+ 2025-09-26 09:35:30,291 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0835 | Val rms_score: 0.4413
+ 2025-09-26 09:35:39,967 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0786 | Val rms_score: 0.4407
+ 2025-09-26 09:35:49,621 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0688 | Val rms_score: 0.4483
+ 2025-09-26 09:35:58,805 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0651 | Val rms_score: 0.4519
+ 2025-09-26 09:36:08,615 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0655 | Val rms_score: 0.4453
+ 2025-09-26 09:36:18,681 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0804 | Val rms_score: 0.4619
+ 2025-09-26 09:36:28,587 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0703 | Val rms_score: 0.4507
+ 2025-09-26 09:36:38,214 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0642 | Val rms_score: 0.4498
+ 2025-09-26 09:36:47,874 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0648 | Val rms_score: 0.4495
+ 2025-09-26 09:36:57,617 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0549 | Val rms_score: 0.4558
+ 2025-09-26 09:37:08,386 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0549 | Val rms_score: 0.4618
+ 2025-09-26 09:37:18,769 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0466 | Val rms_score: 0.4428
+ 2025-09-26 09:37:28,983 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0503 | Val rms_score: 0.4503
+ 2025-09-26 09:37:38,966 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0475 | Val rms_score: 0.4624
+ 2025-09-26 09:37:49,061 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0471 | Val rms_score: 0.4465
+ 2025-09-26 09:37:58,472 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0445 | Val rms_score: 0.4500
+ 2025-09-26 09:38:08,711 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0515 | Val rms_score: 0.4532
+ 2025-09-26 09:38:19,225 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0371 | Val rms_score: 0.4506
+ 2025-09-26 09:38:29,389 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0408 | Val rms_score: 0.4450
+ 2025-09-26 09:38:39,531 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0366 | Val rms_score: 0.4561
+ 2025-09-26 09:38:50,366 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0410 | Val rms_score: 0.4545
+ 2025-09-26 09:39:00,891 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0378 | Val rms_score: 0.4594
+ 2025-09-26 09:39:11,613 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0369 | Val rms_score: 0.4519
+ 2025-09-26 09:39:21,882 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0378 | Val rms_score: 0.4563
+ 2025-09-26 09:39:31,862 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0375 | Val rms_score: 0.4577
+ 2025-09-26 09:39:42,392 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0327 | Val rms_score: 0.4565
+ 2025-09-26 09:39:52,701 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0365 | Val rms_score: 0.4501
+ 2025-09-26 09:40:02,771 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0327 | Val rms_score: 0.4465
+ 2025-09-26 09:40:12,611 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0326 | Val rms_score: 0.4511
+ 2025-09-26 09:40:22,381 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0293 | Val rms_score: 0.4547
+ 2025-09-26 09:40:32,959 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0304 | Val rms_score: 0.4527
+ 2025-09-26 09:40:43,086 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0271 | Val rms_score: 0.4481
+ 2025-09-26 09:40:53,360 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0302 | Val rms_score: 0.4502
+ 2025-09-26 09:41:03,071 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0270 | Val rms_score: 0.4497
+ 2025-09-26 09:41:12,666 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0295 | Val rms_score: 0.4507
+ 2025-09-26 09:41:22,759 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0247 | Val rms_score: 0.4563
+ 2025-09-26 09:41:32,483 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0265 | Val rms_score: 0.4492
+ 2025-09-26 09:41:42,193 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0264 | Val rms_score: 0.4505
+ 2025-09-26 09:41:51,260 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0259 | Val rms_score: 0.4518
+ 2025-09-26 09:42:00,331 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0237 | Val rms_score: 0.4468
+ 2025-09-26 09:42:10,228 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0256 | Val rms_score: 0.4531
+ 2025-09-26 09:42:19,861 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0231 | Val rms_score: 0.4548
+ 2025-09-26 09:42:28,865 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0258 | Val rms_score: 0.4534
+ 2025-09-26 09:42:37,601 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0269 | Val rms_score: 0.4488
+ 2025-09-26 09:42:49,110 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0239 | Val rms_score: 0.4518
+ 2025-09-26 09:42:58,294 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0234 | Val rms_score: 0.4419
+ 2025-09-26 09:43:07,311 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0242 | Val rms_score: 0.4520
+ 2025-09-26 09:43:16,487 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0229 | Val rms_score: 0.4493
+ 2025-09-26 09:43:25,318 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0197 | Val rms_score: 0.4529
+ 2025-09-26 09:43:34,383 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0237 | Val rms_score: 0.4446
+ 2025-09-26 09:43:43,998 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0244 | Val rms_score: 0.4494
+ 2025-09-26 09:43:53,229 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0208 | Val rms_score: 0.4509
+ 2025-09-26 09:44:02,244 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0266 | Val rms_score: 0.4445
+ 2025-09-26 09:44:11,048 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0230 | Val rms_score: 0.4503
+ 2025-09-26 09:44:20,513 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0196 | Val rms_score: 0.4490
+ 2025-09-26 09:44:30,239 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0215 | Val rms_score: 0.4459
+ 2025-09-26 09:44:39,412 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0201 | Val rms_score: 0.4498
+ 2025-09-26 09:44:48,731 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0208 | Val rms_score: 0.4489
+ 2025-09-26 09:44:57,154 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0190 | Val rms_score: 0.4468
+ 2025-09-26 09:45:06,401 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0205 | Val rms_score: 0.4480
+ 2025-09-26 09:45:15,922 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0194 | Val rms_score: 0.4482
+ 2025-09-26 09:45:24,992 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0184 | Val rms_score: 0.4484
+ 2025-09-26 09:45:34,397 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0188 | Val rms_score: 0.4450
+ 2025-09-26 09:45:43,300 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0186 | Val rms_score: 0.4467
+ 2025-09-26 09:45:52,471 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0182 | Val rms_score: 0.4451
+ 2025-09-26 09:46:01,888 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0183 | Val rms_score: 0.4459
+ 2025-09-26 09:46:10,954 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0170 | Val rms_score: 0.4473
+ 2025-09-26 09:46:20,459 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0188 | Val rms_score: 0.4470
+ 2025-09-26 09:46:28,831 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0178 | Val rms_score: 0.4450
+ 2025-09-26 09:46:37,977 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0178 | Val rms_score: 0.4458
+ 2025-09-26 09:46:47,529 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0177 | Val rms_score: 0.4454
+ 2025-09-26 09:46:56,896 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0191 | Val rms_score: 0.4495
+ 2025-09-26 09:47:07,237 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0189 | Val rms_score: 0.4502
+ 2025-09-26 09:47:15,691 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0169 | Val rms_score: 0.4470
+ 2025-09-26 09:47:24,728 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0180 | Val rms_score: 0.4456
+ 2025-09-26 09:47:34,329 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0177 | Val rms_score: 0.4491
+ 2025-09-26 09:47:43,318 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0178 | Val rms_score: 0.4441
+ 2025-09-26 09:47:52,658 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0145 | Val rms_score: 0.4423
+ 2025-09-26 09:48:01,552 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0170 | Val rms_score: 0.4467
+ 2025-09-26 09:48:10,911 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0148 | Val rms_score: 0.4435
+ 2025-09-26 09:48:20,510 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0156 | Val rms_score: 0.4452
+ 2025-09-26 09:48:29,773 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0170 | Val rms_score: 0.4455
+ 2025-09-26 09:48:39,070 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0154 | Val rms_score: 0.4490
+ 2025-09-26 09:48:47,914 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0165 | Val rms_score: 0.4465
+ 2025-09-26 09:48:57,494 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0153 | Val rms_score: 0.4478
+ 2025-09-26 09:49:06,969 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0143 | Val rms_score: 0.4476
+ 2025-09-26 09:49:16,104 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0161 | Val rms_score: 0.4474
+ 2025-09-26 09:49:25,032 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0163 | Val rms_score: 0.4441
+ 2025-09-26 09:49:34,680 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0164 | Val rms_score: 0.4452
+ 2025-09-26 09:49:35,586 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.4860
+ 2025-09-26 09:49:35,888 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_cl at 2025-09-26_09-49-35
+ 2025-09-26 09:49:43,504 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5938 | Val rms_score: 0.4181
+ 2025-09-26 09:49:43,505 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
+ 2025-09-26 09:49:44,057 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4181
+ 2025-09-26 09:49:53,537 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4201 | Val rms_score: 0.4260
+ 2025-09-26 09:50:02,675 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3457 | Val rms_score: 0.4165
+ 2025-09-26 09:50:02,828 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 108
+ 2025-09-26 09:50:03,440 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4165
+ 2025-09-26 09:50:12,952 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2951 | Val rms_score: 0.4266
+ 2025-09-26 09:50:21,519 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2413 | Val rms_score: 0.4232
+ 2025-09-26 09:50:30,826 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1904 | Val rms_score: 0.4287
+ 2025-09-26 09:50:40,401 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1641 | Val rms_score: 0.4295
+ 2025-09-26 09:50:49,769 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1432 | Val rms_score: 0.4336
+ 2025-09-26 09:50:59,310 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1328 | Val rms_score: 0.4264
+ 2025-09-26 09:51:07,885 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1250 | Val rms_score: 0.4305
+ 2025-09-26 09:51:17,118 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1033 | Val rms_score: 0.4281
+ 2025-09-26 09:51:26,567 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0996 | Val rms_score: 0.4315
+ 2025-09-26 09:51:35,810 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0833 | Val rms_score: 0.4283
+ 2025-09-26 09:51:45,212 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0684 | Val rms_score: 0.4347
+ 2025-09-26 09:51:54,266 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0742 | Val rms_score: 0.4313
+ 2025-09-26 09:52:03,497 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0677 | Val rms_score: 0.4468
+ 2025-09-26 09:52:13,157 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0924 | Val rms_score: 0.4251
+ 2025-09-26 09:52:22,401 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0638 | Val rms_score: 0.4336
+ 2025-09-26 09:52:31,674 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0616 | Val rms_score: 0.4391
+ 2025-09-26 09:52:40,574 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0602 | Val rms_score: 0.4274
+ 2025-09-26 09:52:49,948 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0519 | Val rms_score: 0.4405
+ 2025-09-26 09:52:59,413 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0514 | Val rms_score: 0.4369
+ 2025-09-26 09:53:08,515 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0497 | Val rms_score: 0.4443
+ 2025-09-26 09:53:17,569 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0497 | Val rms_score: 0.4424
+ 2025-09-26 09:53:26,234 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0471 | Val rms_score: 0.4466
+ 2025-09-26 09:53:35,567 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0454 | Val rms_score: 0.4401
+ 2025-09-26 09:53:43,808 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0412 | Val rms_score: 0.4365
+ 2025-09-26 09:53:55,313 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0459 | Val rms_score: 0.4430
+ 2025-09-26 09:54:04,608 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0419 | Val rms_score: 0.4347
+ 2025-09-26 09:54:13,056 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0384 | Val rms_score: 0.4410
+ 2025-09-26 09:54:22,308 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0425 | Val rms_score: 0.4422
+ 2025-09-26 09:54:32,037 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0354 | Val rms_score: 0.4384
+ 2025-09-26 09:54:41,127 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0397 | Val rms_score: 0.4384
+ 2025-09-26 09:54:50,210 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0356 | Val rms_score: 0.4449
+ 2025-09-26 09:54:59,123 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0345 | Val rms_score: 0.4377
+ 2025-09-26 09:55:08,404 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0319 | Val rms_score: 0.4433
+ 2025-09-26 09:55:17,729 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0308 | Val rms_score: 0.4438
+ 2025-09-26 09:55:26,881 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0332 | Val rms_score: 0.4407
+ 2025-09-26 09:55:35,890 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0322 | Val rms_score: 0.4371
+ 2025-09-26 09:55:45,322 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0341 | Val rms_score: 0.4406
+ 2025-09-26 09:55:54,832 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0356 | Val rms_score: 0.4406
+ 2025-09-26 09:56:04,701 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0306 | Val rms_score: 0.4399
+ 2025-09-26 09:56:13,966 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0304 | Val rms_score: 0.4448
+ 2025-09-26 09:56:23,506 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0343 | Val rms_score: 0.4392
+ 2025-09-26 09:56:32,957 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0309 | Val rms_score: 0.4388
+ 2025-09-26 09:56:42,647 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0276 | Val rms_score: 0.4359
+ 2025-09-26 09:56:52,332 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0240 | Val rms_score: 0.4376
+ 2025-09-26 09:57:01,743 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0280 | Val rms_score: 0.4383
+ 2025-09-26 09:57:11,255 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0256 | Val rms_score: 0.4353
+ 2025-09-26 09:57:20,641 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0256 | Val rms_score: 0.4369
+ 2025-09-26 09:57:30,468 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0263 | Val rms_score: 0.4420
+ 2025-09-26 09:57:40,262 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0255 | Val rms_score: 0.4392
+ 2025-09-26 09:57:49,815 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0238 | Val rms_score: 0.4435
+ 2025-09-26 09:57:59,581 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0250 | Val rms_score: 0.4369
+ 2025-09-26 09:58:08,919 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0245 | Val rms_score: 0.4388
+ 2025-09-26 09:58:19,640 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0249 | Val rms_score: 0.4378
+ 2025-09-26 09:58:29,664 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0217 | Val rms_score: 0.4384
+ 2025-09-26 09:58:39,396 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0226 | Val rms_score: 0.4371
+ 2025-09-26 09:58:49,139 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0256 | Val rms_score: 0.4387
+ 2025-09-26 09:58:58,947 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0244 | Val rms_score: 0.4427
+ 2025-09-26 09:59:09,059 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0240 | Val rms_score: 0.4377
+ 2025-09-26 09:59:18,988 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0232 | Val rms_score: 0.4401
+ 2025-09-26 09:59:28,771 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0217 | Val rms_score: 0.4395
+ 2025-09-26 09:59:38,933 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0203 | Val rms_score: 0.4425
+ 2025-09-26 09:59:48,613 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0212 | Val rms_score: 0.4355
+ 2025-09-26 09:59:58,687 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0215 | Val rms_score: 0.4400
+ 2025-09-26 10:00:08,699 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0218 | Val rms_score: 0.4400
+ 2025-09-26 10:00:17,914 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0229 | Val rms_score: 0.4376
+ 2025-09-26 10:00:27,174 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0190 | Val rms_score: 0.4390
+ 2025-09-26 10:00:36,455 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0217 | Val rms_score: 0.4415
+ 2025-09-26 10:00:45,891 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0179 | Val rms_score: 0.4421
+ 2025-09-26 10:00:55,348 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0184 | Val rms_score: 0.4411
+ 2025-09-26 10:01:04,725 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0209 | Val rms_score: 0.4358
+ 2025-09-26 10:01:14,259 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0199 | Val rms_score: 0.4398
+ 2025-09-26 10:01:23,230 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0179 | Val rms_score: 0.4333
+ 2025-09-26 10:01:32,247 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0191 | Val rms_score: 0.4384
+ 2025-09-26 10:01:42,001 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0196 | Val rms_score: 0.4392
+ 2025-09-26 10:01:50,797 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0192 | Val rms_score: 0.4400
+ 2025-09-26 10:02:00,121 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0175 | Val rms_score: 0.4378
+ 2025-09-26 10:02:09,148 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0182 | Val rms_score: 0.4411
+ 2025-09-26 10:02:18,580 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0168 | Val rms_score: 0.4387
+ 2025-09-26 10:02:27,998 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0173 | Val rms_score: 0.4353
+ 2025-09-26 10:02:37,639 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0178 | Val rms_score: 0.4340
+ 2025-09-26 10:02:47,858 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0179 | Val rms_score: 0.4365
+ 2025-09-26 10:02:56,792 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0163 | Val rms_score: 0.4382
+ 2025-09-26 10:03:05,951 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0186 | Val rms_score: 0.4373
+ 2025-09-26 10:03:15,631 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0176 | Val rms_score: 0.4371
+ 2025-09-26 10:03:24,723 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0171 | Val rms_score: 0.4376
+ 2025-09-26 10:03:33,931 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0137 | Val rms_score: 0.4353
+ 2025-09-26 10:03:42,891 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0169 | Val rms_score: 0.4395
+ 2025-09-26 10:03:52,292 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0150 | Val rms_score: 0.4314
+ 2025-09-26 10:04:02,006 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0186 | Val rms_score: 0.4360
+ 2025-09-26 10:04:11,202 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0158 | Val rms_score: 0.4390
+ 2025-09-26 10:04:20,601 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0156 | Val rms_score: 0.4357
+ 2025-09-26 10:04:29,346 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0153 | Val rms_score: 0.4328
+ 2025-09-26 10:04:38,887 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0158 | Val rms_score: 0.4369
+ 2025-09-26 10:04:48,369 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0169 | Val rms_score: 0.4387
+ 2025-09-26 10:04:57,847 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0143 | Val rms_score: 0.4403
+ 2025-09-26 10:05:06,681 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0168 | Val rms_score: 0.4379
+ 2025-09-26 10:05:18,535 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0153 | Val rms_score: 0.4329
+ 2025-09-26 10:05:19,458 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.4894
+ 2025-09-26 10:05:19,761 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4932, Std Dev: 0.0079
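The averaged figure in the line above can be reproduced from the three per-run test scores logged for astrazeneca_cl (0.5042, 0.4860, 0.4894). A minimal sketch, assuming the reported Std Dev is the population standard deviation (ddof=0) rather than the sample one:

```python
import statistics

# Test rms_score from each of the three triplicate runs above
scores = [0.5042, 0.4860, 0.4894]

avg = statistics.fmean(scores)
std = statistics.pstdev(scores)  # population std dev; sample std (statistics.stdev) would give ~0.0097

print(f"Avg rms_score: {avg:.4f}, Std Dev: {std:.4f}")  # → Avg rms_score: 0.4932, Std Dev: 0.0079
```

That the population formula matches the logged 0.0079 (the sample formula does not) suggests the benchmark script aggregates with ddof=0.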
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_astrazeneca_logd74_epochs100_batch_size16_20250927_204252.log ADDED
@@ -0,0 +1,365 @@
+ 2025-09-27 20:42:52,583 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Running benchmark for dataset: astrazeneca_logd74
+ 2025-09-27 20:42:52,583 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - dataset: astrazeneca_logd74, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
+ 2025-09-27 20:42:52,590 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Starting triplicate run 1 for dataset astrazeneca_logd74 at 2025-09-27_20-42-52
+ 2025-09-27 20:43:12,107 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 1/100 | Train Loss: 0.3547 | Val rms_score: 0.7262
+ 2025-09-27 20:43:12,107 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 210
+ 2025-09-27 20:43:12,405 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 1 with val rms_score: 0.7262
+ 2025-09-27 20:43:30,796 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 2/100 | Train Loss: 0.2609 | Val rms_score: 0.7113
+ 2025-09-27 20:43:31,029 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 420
+ 2025-09-27 20:43:32,014 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 2 with val rms_score: 0.7113
+ 2025-09-27 20:43:48,205 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 3/100 | Train Loss: 0.2292 | Val rms_score: 0.6861
+ 2025-09-27 20:43:48,375 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 630
+ 2025-09-27 20:43:49,102 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 3 with val rms_score: 0.6861
+ 2025-09-27 20:44:07,017 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 4/100 | Train Loss: 0.1727 | Val rms_score: 0.6754
+ 2025-09-27 20:44:07,238 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 840
+ 2025-09-27 20:44:07,917 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 4 with val rms_score: 0.6754
+ 2025-09-27 20:44:25,832 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 5/100 | Train Loss: 0.1487 | Val rms_score: 0.6892
+ 2025-09-27 20:44:43,286 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 6/100 | Train Loss: 0.1583 | Val rms_score: 0.6823
+ 2025-09-27 20:45:03,159 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 7/100 | Train Loss: 0.1455 | Val rms_score: 0.6730
+ 2025-09-27 20:45:03,419 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 1470
+ 2025-09-27 20:45:04,196 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 7 with val rms_score: 0.6730
+ 2025-09-27 20:45:20,536 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 8/100 | Train Loss: 0.1086 | Val rms_score: 0.6667
+ 2025-09-27 20:45:20,753 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 1680
+ 2025-09-27 20:45:21,442 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 8 with val rms_score: 0.6667
+ 2025-09-27 20:45:37,859 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 9/100 | Train Loss: 0.1104 | Val rms_score: 0.6780
+ 2025-09-27 20:45:57,708 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 10/100 | Train Loss: 0.0900 | Val rms_score: 0.6685
+ 2025-09-27 20:46:13,233 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 11/100 | Train Loss: 0.0816 | Val rms_score: 0.6750
+ 2025-09-27 20:46:29,122 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 12/100 | Train Loss: 0.0688 | Val rms_score: 0.6640
+ 2025-09-27 20:46:29,336 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 2520
+ 2025-09-27 20:46:30,223 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 12 with val rms_score: 0.6640
+ 2025-09-27 20:46:48,704 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 13/100 | Train Loss: 0.0693 | Val rms_score: 0.6663
+ 2025-09-27 20:47:05,976 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 14/100 | Train Loss: 0.0742 | Val rms_score: 0.6707
+ 2025-09-27 20:47:23,755 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 15/100 | Train Loss: 0.0781 | Val rms_score: 0.6836
+ 2025-09-27 20:47:40,346 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 16/100 | Train Loss: 0.0589 | Val rms_score: 0.6691
+ 2025-09-27 20:47:55,801 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 17/100 | Train Loss: 0.0629 | Val rms_score: 0.6620
+ 2025-09-27 20:47:55,987 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 3570
+ 2025-09-27 20:47:57,098 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 17 with val rms_score: 0.6620
+ 2025-09-27 20:48:15,173 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 18/100 | Train Loss: 0.0617 | Val rms_score: 0.6638
+ 2025-09-27 20:48:29,772 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 19/100 | Train Loss: 0.0649 | Val rms_score: 0.6766
+ 2025-09-27 20:48:45,624 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 20/100 | Train Loss: 0.0537 | Val rms_score: 0.6780
+ 2025-09-27 20:49:03,350 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 21/100 | Train Loss: 0.0605 | Val rms_score: 0.6812
+ 2025-09-27 20:49:19,726 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 22/100 | Train Loss: 0.0555 | Val rms_score: 0.6653
+ 2025-09-27 20:49:36,980 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 23/100 | Train Loss: 0.0523 | Val rms_score: 0.6637
+ 2025-09-27 20:49:52,576 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 24/100 | Train Loss: 0.0523 | Val rms_score: 0.6659
+ 2025-09-27 20:50:07,834 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 25/100 | Train Loss: 0.0509 | Val rms_score: 0.6683
+ 2025-09-27 20:50:25,696 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 26/100 | Train Loss: 0.0513 | Val rms_score: 0.6769
+ 2025-09-27 20:50:42,287 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 27/100 | Train Loss: 0.0507 | Val rms_score: 0.6626
+ 2025-09-27 20:50:59,130 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 28/100 | Train Loss: 0.0482 | Val rms_score: 0.6705
+ 2025-09-27 20:51:16,204 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 29/100 | Train Loss: 0.0444 | Val rms_score: 0.6709
+ 2025-09-27 20:51:31,893 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 30/100 | Train Loss: 0.0434 | Val rms_score: 0.6718
+ 2025-09-27 20:51:50,012 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 31/100 | Train Loss: 0.0404 | Val rms_score: 0.6787
+ 2025-09-27 20:52:05,878 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 32/100 | Train Loss: 0.0430 | Val rms_score: 0.6814
+ 2025-09-27 20:52:23,222 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 33/100 | Train Loss: 0.0443 | Val rms_score: 0.6620
+ 2025-09-27 20:52:23,449 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 6930
+ 2025-09-27 20:52:24,189 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 33 with val rms_score: 0.6620
+ 2025-09-27 20:52:40,297 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 34/100 | Train Loss: 0.0436 | Val rms_score: 0.6668
+ 2025-09-27 20:52:56,726 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 35/100 | Train Loss: 0.0425 | Val rms_score: 0.6670
+ 2025-09-27 20:53:13,157 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 36/100 | Train Loss: 0.0396 | Val rms_score: 0.6599
+ 2025-09-27 20:53:13,867 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 7560
+ 2025-09-27 20:53:14,571 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 36 with val rms_score: 0.6599
+ 2025-09-27 20:53:28,813 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 37/100 | Train Loss: 0.0377 | Val rms_score: 0.6705
+ 2025-09-27 20:53:46,518 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 38/100 | Train Loss: 0.0410 | Val rms_score: 0.6665
+ 2025-09-27 20:54:03,461 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 39/100 | Train Loss: 0.0385 | Val rms_score: 0.6743
+ 2025-09-27 20:54:17,202 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 40/100 | Train Loss: 0.0409 | Val rms_score: 0.6668
+ 2025-09-27 20:54:33,384 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 41/100 | Train Loss: 0.0354 | Val rms_score: 0.6566
+ 2025-09-27 20:54:33,873 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 8610
+ 2025-09-27 20:54:34,552 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 41 with val rms_score: 0.6566
+ 2025-09-27 20:54:50,055 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 42/100 | Train Loss: 0.0391 | Val rms_score: 0.6584
+ 2025-09-27 20:55:08,447 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 43/100 | Train Loss: 0.0380 | Val rms_score: 0.6695
+ 2025-09-27 20:55:22,593 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 44/100 | Train Loss: 0.0350 | Val rms_score: 0.6637
+ 2025-09-27 20:55:38,967 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 45/100 | Train Loss: 0.0389 | Val rms_score: 0.6698
+ 2025-09-27 20:55:53,351 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 46/100 | Train Loss: 0.0370 | Val rms_score: 0.6651
+ 2025-09-27 20:56:08,998 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 47/100 | Train Loss: 0.0364 | Val rms_score: 0.6667
+ 2025-09-27 20:56:27,532 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 48/100 | Train Loss: 0.0385 | Val rms_score: 0.6636
+ 2025-09-27 20:56:41,407 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 49/100 | Train Loss: 0.0316 | Val rms_score: 0.6637
+ 2025-09-27 20:56:58,618 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 50/100 | Train Loss: 0.0339 | Val rms_score: 0.6718
+ 2025-09-27 20:57:14,113 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 51/100 | Train Loss: 0.0324 | Val rms_score: 0.6761
+ 2025-09-27 20:57:30,305 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 52/100 | Train Loss: 0.0371 | Val rms_score: 0.6610
+ 2025-09-27 20:57:48,198 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 53/100 | Train Loss: 0.0302 | Val rms_score: 0.6623
+ 2025-09-27 20:58:02,951 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 54/100 | Train Loss: 0.0344 | Val rms_score: 0.6600
+ 2025-09-27 20:58:21,734 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 55/100 | Train Loss: 0.0358 | Val rms_score: 0.6669
+ 2025-09-27 20:58:37,064 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 56/100 | Train Loss: 0.0326 | Val rms_score: 0.6654
+ 2025-09-27 20:58:53,994 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 57/100 | Train Loss: 0.0326 | Val rms_score: 0.6705
+ 2025-09-27 20:59:09,642 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 58/100 | Train Loss: 0.0303 | Val rms_score: 0.6712
+ 2025-09-27 20:59:25,661 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 59/100 | Train Loss: 0.0325 | Val rms_score: 0.6700
+ 2025-09-27 20:59:43,060 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 60/100 | Train Loss: 0.0314 | Val rms_score: 0.6641
+ 2025-09-27 20:59:57,225 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 61/100 | Train Loss: 0.0328 | Val rms_score: 0.6726
+ 2025-09-27 21:00:15,123 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 62/100 | Train Loss: 0.0299 | Val rms_score: 0.6712
+ 2025-09-27 21:00:30,465 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 63/100 | Train Loss: 0.0279 | Val rms_score: 0.6608
+ 2025-09-27 21:00:46,081 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 64/100 | Train Loss: 0.0295 | Val rms_score: 0.6557
+ 2025-09-27 21:00:46,261 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 13440
+ 2025-09-27 21:00:46,968 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 64 with val rms_score: 0.6557
+ 2025-09-27 21:01:03,929 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 65/100 | Train Loss: 0.0302 | Val rms_score: 0.6682
+ 2025-09-27 21:01:18,146 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 66/100 | Train Loss: 0.0303 | Val rms_score: 0.6678
+ 2025-09-27 21:01:37,268 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 67/100 | Train Loss: 0.0306 | Val rms_score: 0.6613
+ 2025-09-27 21:01:52,885 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 68/100 | Train Loss: 0.0305 | Val rms_score: 0.6681
+ 2025-09-27 21:02:07,517 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 69/100 | Train Loss: 0.0295 | Val rms_score: 0.6668
+ 2025-09-27 21:02:23,978 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 70/100 | Train Loss: 0.0308 | Val rms_score: 0.6692
+ 2025-09-27 21:02:38,598 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 71/100 | Train Loss: 0.0314 | Val rms_score: 0.6653
+ 2025-09-27 21:02:57,927 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 72/100 | Train Loss: 0.0328 | Val rms_score: 0.6631
+ 2025-09-27 21:03:13,006 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 73/100 | Train Loss: 0.0275 | Val rms_score: 0.6608
+ 2025-09-27 21:03:29,149 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 74/100 | Train Loss: 0.0322 | Val rms_score: 0.6668
+ 2025-09-27 21:03:43,507 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 75/100 | Train Loss: 0.0309 | Val rms_score: 0.6671
+ 2025-09-27 21:03:59,227 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 76/100 | Train Loss: 0.0267 | Val rms_score: 0.6668
+ 2025-09-27 21:04:18,370 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 77/100 | Train Loss: 0.0288 | Val rms_score: 0.6666
+ 2025-09-27 21:04:32,547 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 78/100 | Train Loss: 0.0299 | Val rms_score: 0.6631
+ 2025-09-27 21:04:52,316 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 79/100 | Train Loss: 0.0288 | Val rms_score: 0.6619
+ 2025-09-27 21:05:08,352 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 80/100 | Train Loss: 0.0277 | Val rms_score: 0.6692
+ 2025-09-27 21:05:26,544 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 81/100 | Train Loss: 0.0326 | Val rms_score: 0.6617
+ 2025-09-27 21:05:45,380 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 82/100 | Train Loss: 0.0258 | Val rms_score: 0.6660
+ 2025-09-27 21:06:00,938 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 83/100 | Train Loss: 0.0260 | Val rms_score: 0.6673
+ 2025-09-27 21:06:17,053 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 84/100 | Train Loss: 0.0285 | Val rms_score: 0.6693
+ 2025-09-27 21:06:37,870 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 85/100 | Train Loss: 0.0277 | Val rms_score: 0.6672
+ 2025-09-27 21:06:54,341 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 86/100 | Train Loss: 0.0302 | Val rms_score: 0.6658
+ 2025-09-27 21:07:12,924 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 87/100 | Train Loss: 0.0280 | Val rms_score: 0.6625
+ 2025-09-27 21:07:28,417 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 88/100 | Train Loss: 0.0268 | Val rms_score: 0.6721
+ 2025-09-27 21:07:43,137 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 89/100 | Train Loss: 0.0267 | Val rms_score: 0.6669
+ 2025-09-27 21:07:59,828 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 90/100 | Train Loss: 0.0266 | Val rms_score: 0.6599
+ 2025-09-27 21:08:15,888 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 91/100 | Train Loss: 0.0260 | Val rms_score: 0.6602
+ 2025-09-27 21:08:32,802 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 92/100 | Train Loss: 0.0283 | Val rms_score: 0.6664
+ 2025-09-27 21:08:45,992 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 93/100 | Train Loss: 0.0243 | Val rms_score: 0.6608
+ 2025-09-27 21:09:01,886 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 94/100 | Train Loss: 0.0231 | Val rms_score: 0.6623
+ 2025-09-27 21:09:15,198 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 95/100 | Train Loss: 0.0266 | Val rms_score: 0.6592
+ 2025-09-27 21:09:29,977 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 96/100 | Train Loss: 0.0271 | Val rms_score: 0.6645
+ 2025-09-27 21:09:46,907 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 97/100 | Train Loss: 0.0242 | Val rms_score: 0.6657
+ 2025-09-27 21:10:01,436 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 98/100 | Train Loss: 0.0254 | Val rms_score: 0.6652
+ 2025-09-27 21:10:17,015 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 99/100 | Train Loss: 0.0274 | Val rms_score: 0.6598
+ 2025-09-27 21:10:31,549 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 100/100 | Train Loss: 0.0248 | Val rms_score: 0.6656
+ 2025-09-27 21:10:32,536 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Test rms_score: 0.7563
+ 2025-09-27 21:10:32,874 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Starting triplicate run 2 for dataset astrazeneca_logd74 at 2025-09-27_21-10-32
+ 2025-09-27 21:10:45,857 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 1/100 | Train Loss: 0.4250 | Val rms_score: 0.7160
+ 2025-09-27 21:10:45,857 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 210
+ 2025-09-27 21:10:46,574 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 1 with val rms_score: 0.7160
+ 2025-09-27 21:10:59,220 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 2/100 | Train Loss: 0.2531 | Val rms_score: 0.7243
+ 2025-09-27 21:11:14,374 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 3/100 | Train Loss: 0.2208 | Val rms_score: 0.6949
+ 2025-09-27 21:11:14,604 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 630
+ 2025-09-27 21:11:15,330 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 3 with val rms_score: 0.6949
+ 2025-09-27 21:11:30,757 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 4/100 | Train Loss: 0.1828 | Val rms_score: 0.7050
+ 2025-09-27 21:11:49,862 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 5/100 | Train Loss: 0.1475 | Val rms_score: 0.6884
+ 2025-09-27 21:11:50,096 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 1050
+ 2025-09-27 21:11:48,510 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 5 with val rms_score: 0.6884
+ 2025-09-27 21:12:05,878 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 6/100 | Train Loss: 0.1500 | Val rms_score: 0.6980
+ 2025-09-27 21:12:22,689 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 7/100 | Train Loss: 0.1411 | Val rms_score: 0.7206
+ 2025-09-27 21:12:44,953 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 8/100 | Train Loss: 0.1133 | Val rms_score: 0.6831
+ 2025-09-27 21:12:45,183 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 1680
+ 2025-09-27 21:12:43,846 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 8 with val rms_score: 0.6831
+ 2025-09-27 21:13:03,327 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 9/100 | Train Loss: 0.1160 | Val rms_score: 0.6730
+ 2025-09-27 21:13:03,557 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 1890
+ 2025-09-27 21:13:04,293 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 9 with val rms_score: 0.6730
+ 2025-09-27 21:13:21,173 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 10/100 | Train Loss: 0.1056 | Val rms_score: 0.6785
+ 2025-09-27 21:13:37,194 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 11/100 | Train Loss: 0.0910 | Val rms_score: 0.6781
+ 2025-09-27 21:13:53,351 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 12/100 | Train Loss: 0.0820 | Val rms_score: 0.6654
+ 2025-09-27 21:13:53,547 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 2520
+ 2025-09-27 21:13:54,353 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 12 with val rms_score: 0.6654
+ 2025-09-27 21:14:10,272 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 13/100 | Train Loss: 0.0771 | Val rms_score: 0.6856
+ 2025-09-27 21:14:27,350 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 14/100 | Train Loss: 0.0762 | Val rms_score: 0.6706
+ 2025-09-27 21:14:42,470 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 15/100 | Train Loss: 0.0672 | Val rms_score: 0.6627
+ 2025-09-27 21:14:42,653 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 3150
+ 2025-09-27 21:14:43,379 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 15 with val rms_score: 0.6627
+ 2025-09-27 21:14:59,348 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 16/100 | Train Loss: 0.0703 | Val rms_score: 0.6749
+ 2025-09-27 21:15:13,102 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 17/100 | Train Loss: 0.0607 | Val rms_score: 0.6732
+ 2025-09-27 21:15:29,112 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 18/100 | Train Loss: 0.0664 | Val rms_score: 0.6760
+ 2025-09-27 21:15:42,821 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 19/100 | Train Loss: 0.0604 | Val rms_score: 0.6758
+ 2025-09-27 21:15:59,230 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 20/100 | Train Loss: 0.0628 | Val rms_score: 0.6640
+ 2025-09-27 21:16:12,676 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 21/100 | Train Loss: 0.0586 | Val rms_score: 0.6674
+ 2025-09-27 21:16:25,864 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 22/100 | Train Loss: 0.0551 | Val rms_score: 0.6608
+ 2025-09-27 21:16:26,061 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 4620
+ 2025-09-27 21:16:26,766 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 22 with val rms_score: 0.6608
+ 2025-09-27 21:16:41,874 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 23/100 | Train Loss: 0.0612 | Val rms_score: 0.6676
+ 2025-09-27 21:16:56,020 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 24/100 | Train Loss: 0.0574 | Val rms_score: 0.6884
+ 2025-09-27 21:17:11,275 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 25/100 | Train Loss: 0.0472 | Val rms_score: 0.6910
+ 2025-09-27 21:17:23,753 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 26/100 | Train Loss: 0.0503 | Val rms_score: 0.6666
+ 2025-09-27 21:17:39,607 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 27/100 | Train Loss: 0.0482 | Val rms_score: 0.6679
+ 2025-09-27 21:17:53,788 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 28/100 | Train Loss: 0.0443 | Val rms_score: 0.6783
+ 2025-09-27 21:18:09,998 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 29/100 | Train Loss: 0.0486 | Val rms_score: 0.6749
+ 2025-09-27 21:18:23,004 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 30/100 | Train Loss: 0.0444 | Val rms_score: 0.6663
+ 2025-09-27 21:18:38,044 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 31/100 | Train Loss: 0.0500 | Val rms_score: 0.6666
+ 2025-09-27 21:18:51,397 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 32/100 | Train Loss: 0.0367 | Val rms_score: 0.6687
+ 2025-09-27 21:19:06,175 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 33/100 | Train Loss: 0.0466 | Val rms_score: 0.6663
+ 2025-09-27 21:19:19,875 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 34/100 | Train Loss: 0.0449 | Val rms_score: 0.6665
+ 2025-09-27 21:19:34,913 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 35/100 | Train Loss: 0.0466 | Val rms_score: 0.6762
+ 2025-09-27 21:19:47,613 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 36/100 | Train Loss: 0.0427 | Val rms_score: 0.6688
+ 2025-09-27 21:20:03,123 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 37/100 | Train Loss: 0.0415 | Val rms_score: 0.6741
+ 2025-09-27 21:20:16,066 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 38/100 | Train Loss: 0.0404 | Val rms_score: 0.6720
+ 2025-09-27 21:20:31,858 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 39/100 | Train Loss: 0.0382 | Val rms_score: 0.6725
+ 2025-09-27 21:20:45,847 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 40/100 | Train Loss: 0.0386 | Val rms_score: 0.6705
+ 2025-09-27 21:21:03,264 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 41/100 | Train Loss: 0.0418 | Val rms_score: 0.6686
+ 2025-09-27 21:21:18,026 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 42/100 | Train Loss: 0.0395 | Val rms_score: 0.6782
+ 2025-09-27 21:21:32,331 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 43/100 | Train Loss: 0.0411 | Val rms_score: 0.6687
+ 2025-09-27 21:21:48,836 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 44/100 | Train Loss: 0.0389 | Val rms_score: 0.6717
+ 2025-09-27 21:22:03,300 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 45/100 | Train Loss: 0.0389 | Val rms_score: 0.6650
+ 2025-09-27 21:22:18,548 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 46/100 | Train Loss: 0.0339 | Val rms_score: 0.6742
+ 2025-09-27 21:22:32,549 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 47/100 | Train Loss: 0.0357 | Val rms_score: 0.6653
+ 2025-09-27 21:22:49,885 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 48/100 | Train Loss: 0.0371 | Val rms_score: 0.6716
+ 2025-09-27 21:23:04,470 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 49/100 | Train Loss: 0.0339 | Val rms_score: 0.6806
+ 2025-09-27 21:23:20,793 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 50/100 | Train Loss: 0.0355 | Val rms_score: 0.6705
+ 2025-09-27 21:23:33,842 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 51/100 | Train Loss: 0.0441 | Val rms_score: 0.6657
+ 2025-09-27 21:23:49,410 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 52/100 | Train Loss: 0.0326 | Val rms_score: 0.6695
+ 2025-09-27 21:24:03,844 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 53/100 | Train Loss: 0.0323 | Val rms_score: 0.6653
+ 2025-09-27 21:24:17,480 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 54/100 | Train Loss: 0.0365 | Val rms_score: 0.6749
+ 2025-09-27 21:24:33,196 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 55/100 | Train Loss: 0.0333 | Val rms_score: 0.6736
+ 2025-09-27 21:24:51,349 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 56/100 | Train Loss: 0.0336 | Val rms_score: 0.6791
+ 2025-09-27 21:25:07,929 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 57/100 | Train Loss: 0.0339 | Val rms_score: 0.6696
+ 2025-09-27 21:25:21,329 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 58/100 | Train Loss: 0.0314 | Val rms_score: 0.6737
+ 2025-09-27 21:25:35,289 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 59/100 | Train Loss: 0.0335 | Val rms_score: 0.6725
205
+ 2025-09-27 21:25:47,363 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 60/100 | Train Loss: 0.0320 | Val rms_score: 0.6683
206
+ 2025-09-27 21:26:01,488 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 61/100 | Train Loss: 0.0273 | Val rms_score: 0.6710
207
+ 2025-09-27 21:26:14,211 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 62/100 | Train Loss: 0.0318 | Val rms_score: 0.6659
208
+ 2025-09-27 21:26:28,078 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 63/100 | Train Loss: 0.0309 | Val rms_score: 0.6698
209
+ 2025-09-27 21:26:39,584 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 64/100 | Train Loss: 0.0309 | Val rms_score: 0.6717
210
+ 2025-09-27 21:26:53,428 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 65/100 | Train Loss: 0.0284 | Val rms_score: 0.6750
211
+ 2025-09-27 21:27:04,966 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 66/100 | Train Loss: 0.0318 | Val rms_score: 0.6659
212
+ 2025-09-27 21:27:20,142 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 67/100 | Train Loss: 0.0306 | Val rms_score: 0.6765
213
+ 2025-09-27 21:27:31,740 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 68/100 | Train Loss: 0.0301 | Val rms_score: 0.6704
214
+ 2025-09-27 21:27:45,393 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 69/100 | Train Loss: 0.0295 | Val rms_score: 0.6700
215
+ 2025-09-27 21:27:57,477 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 70/100 | Train Loss: 0.0298 | Val rms_score: 0.6683
216
+ 2025-09-27 21:28:11,374 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 71/100 | Train Loss: 0.0332 | Val rms_score: 0.6725
217
+ 2025-09-27 21:28:28,784 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 72/100 | Train Loss: 0.0328 | Val rms_score: 0.6664
218
+ 2025-09-27 21:28:40,038 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 73/100 | Train Loss: 0.0311 | Val rms_score: 0.6646
219
+ 2025-09-27 21:28:56,575 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 74/100 | Train Loss: 0.0268 | Val rms_score: 0.6762
220
+ 2025-09-27 21:29:07,137 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 75/100 | Train Loss: 0.0281 | Val rms_score: 0.6741
221
+ 2025-09-27 21:29:22,032 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 76/100 | Train Loss: 0.0297 | Val rms_score: 0.6639
222
+ 2025-09-27 21:29:35,609 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 77/100 | Train Loss: 0.0324 | Val rms_score: 0.6684
223
+ 2025-09-27 21:29:48,310 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 78/100 | Train Loss: 0.0287 | Val rms_score: 0.6691
224
+ 2025-09-27 21:30:02,729 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 79/100 | Train Loss: 0.0286 | Val rms_score: 0.6653
225
+ 2025-09-27 21:30:17,090 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 80/100 | Train Loss: 0.0278 | Val rms_score: 0.6671
226
+ 2025-09-27 21:30:29,752 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 81/100 | Train Loss: 0.0245 | Val rms_score: 0.6721
227
+ 2025-09-27 21:30:43,388 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 82/100 | Train Loss: 0.0277 | Val rms_score: 0.6640
228
+ 2025-09-27 21:30:55,456 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 83/100 | Train Loss: 0.0289 | Val rms_score: 0.6650
229
+ 2025-09-27 21:31:10,294 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 84/100 | Train Loss: 0.0271 | Val rms_score: 0.6761
230
+ 2025-09-27 21:31:22,315 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 85/100 | Train Loss: 0.0275 | Val rms_score: 0.6675
231
+ 2025-09-27 21:31:36,445 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 86/100 | Train Loss: 0.0271 | Val rms_score: 0.6651
232
+ 2025-09-27 21:31:48,125 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 87/100 | Train Loss: 0.0277 | Val rms_score: 0.6598
233
+ 2025-09-27 21:31:48,312 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 18270
234
+ 2025-09-27 21:31:49,138 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 87 with val rms_score: 0.6598
235
+ 2025-09-27 21:32:03,226 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 88/100 | Train Loss: 0.0279 | Val rms_score: 0.6656
236
+ 2025-09-27 21:32:14,941 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 89/100 | Train Loss: 0.0273 | Val rms_score: 0.6687
237
+ 2025-09-27 21:32:28,959 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 90/100 | Train Loss: 0.0262 | Val rms_score: 0.6649
238
+ 2025-09-27 21:32:41,596 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 91/100 | Train Loss: 0.0273 | Val rms_score: 0.6649
239
+ 2025-09-27 21:32:56,670 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 92/100 | Train Loss: 0.0309 | Val rms_score: 0.6650
240
+ 2025-09-27 21:33:08,491 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 93/100 | Train Loss: 0.0302 | Val rms_score: 0.6665
241
+ 2025-09-27 21:33:22,063 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 94/100 | Train Loss: 0.0279 | Val rms_score: 0.6656
242
+ 2025-09-27 21:33:33,588 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 95/100 | Train Loss: 0.0283 | Val rms_score: 0.6657
243
+ 2025-09-27 21:33:49,282 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 96/100 | Train Loss: 0.0255 | Val rms_score: 0.6605
244
+ 2025-09-27 21:34:02,021 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 97/100 | Train Loss: 0.0260 | Val rms_score: 0.6665
245
+ 2025-09-27 21:34:16,295 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 98/100 | Train Loss: 0.0260 | Val rms_score: 0.6652
246
+ 2025-09-27 21:34:28,243 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 99/100 | Train Loss: 0.0259 | Val rms_score: 0.6600
247
+ 2025-09-27 21:34:43,227 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 100/100 | Train Loss: 0.0280 | Val rms_score: 0.6619
248
+ 2025-09-27 21:34:44,157 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Test rms_score: 0.7568
249
+ 2025-09-27 21:34:44,540 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Starting triplicate run 3 for dataset astrazeneca_logd74 at 2025-09-27_21-34-44
250
+ 2025-09-27 21:34:54,755 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 1/100 | Train Loss: 0.3344 | Val rms_score: 0.7121
251
+ 2025-09-27 21:34:54,755 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 210
252
+ 2025-09-27 21:34:55,416 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 1 with val rms_score: 0.7121
253
+ 2025-09-27 21:35:09,846 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 2/100 | Train Loss: 0.2969 | Val rms_score: 0.7051
254
+ 2025-09-27 21:35:10,034 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 420
255
+ 2025-09-27 21:35:10,687 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 2 with val rms_score: 0.7051
256
+ 2025-09-27 21:35:23,002 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 3/100 | Train Loss: 0.2135 | Val rms_score: 0.7345
257
+ 2025-09-27 21:35:37,718 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 4/100 | Train Loss: 0.1742 | Val rms_score: 0.6980
258
+ 2025-09-27 21:35:37,920 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 840
259
+ 2025-09-27 21:35:38,621 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 4 with val rms_score: 0.6980
260
+ 2025-09-27 21:35:51,958 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 5/100 | Train Loss: 0.1688 | Val rms_score: 0.6933
261
+ 2025-09-27 21:35:52,157 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 1050
262
+ 2025-09-27 21:35:52,851 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 5 with val rms_score: 0.6933
263
+ 2025-09-27 21:36:06,618 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 6/100 | Train Loss: 0.1437 | Val rms_score: 0.6795
264
+ 2025-09-27 21:36:07,177 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 1260
265
+ 2025-09-27 21:36:07,844 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 6 with val rms_score: 0.6795
266
+ 2025-09-27 21:36:19,511 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 7/100 | Train Loss: 0.1313 | Val rms_score: 0.6869
267
+ 2025-09-27 21:36:33,124 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 8/100 | Train Loss: 0.1187 | Val rms_score: 0.6715
268
+ 2025-09-27 21:36:33,321 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 1680
269
+ 2025-09-27 21:36:33,980 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 8 with val rms_score: 0.6715
270
+ 2025-09-27 21:36:45,226 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 9/100 | Train Loss: 0.1021 | Val rms_score: 0.6546
271
+ 2025-09-27 21:36:45,423 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Global step of best model: 1890
272
+ 2025-09-27 21:36:46,071 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Best model saved at epoch 9 with val rms_score: 0.6546
273
+ 2025-09-27 21:37:00,446 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 10/100 | Train Loss: 0.0919 | Val rms_score: 0.6710
274
+ 2025-09-27 21:37:13,435 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 11/100 | Train Loss: 0.1023 | Val rms_score: 0.6805
275
+ 2025-09-27 21:37:30,182 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 12/100 | Train Loss: 0.1195 | Val rms_score: 0.6892
276
+ 2025-09-27 21:37:41,476 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 13/100 | Train Loss: 0.0807 | Val rms_score: 0.6751
277
+ 2025-09-27 21:37:55,407 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 14/100 | Train Loss: 0.0816 | Val rms_score: 0.6763
278
+ 2025-09-27 21:38:07,917 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 15/100 | Train Loss: 0.0741 | Val rms_score: 0.6745
279
+ 2025-09-27 21:38:22,204 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 16/100 | Train Loss: 0.0682 | Val rms_score: 0.6678
280
+ 2025-09-27 21:38:35,661 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 17/100 | Train Loss: 0.0696 | Val rms_score: 0.6731
281
+ 2025-09-27 21:38:50,250 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 18/100 | Train Loss: 0.0664 | Val rms_score: 0.6712
282
+ 2025-09-27 21:39:02,607 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 19/100 | Train Loss: 0.0608 | Val rms_score: 0.6782
283
+ 2025-09-27 21:39:17,297 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 20/100 | Train Loss: 0.0631 | Val rms_score: 0.6791
284
+ 2025-09-27 21:39:30,615 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 21/100 | Train Loss: 0.0680 | Val rms_score: 0.6827
285
+ 2025-09-27 21:39:42,232 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 22/100 | Train Loss: 0.0582 | Val rms_score: 0.6705
286
+ 2025-09-27 21:39:56,486 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 23/100 | Train Loss: 0.0526 | Val rms_score: 0.6847
287
+ 2025-09-27 21:40:08,746 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 24/100 | Train Loss: 0.0535 | Val rms_score: 0.6840
288
+ 2025-09-27 21:40:22,110 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 25/100 | Train Loss: 0.0634 | Val rms_score: 0.6742
289
+ 2025-09-27 21:40:33,997 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 26/100 | Train Loss: 0.0477 | Val rms_score: 0.6706
290
+ 2025-09-27 21:40:49,233 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 27/100 | Train Loss: 0.0513 | Val rms_score: 0.6798
291
+ 2025-09-27 21:41:00,287 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 28/100 | Train Loss: 0.0482 | Val rms_score: 0.6730
292
+ 2025-09-27 21:41:14,964 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 29/100 | Train Loss: 0.0497 | Val rms_score: 0.6795
293
+ 2025-09-27 21:41:26,345 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 30/100 | Train Loss: 0.0459 | Val rms_score: 0.6730
294
+ 2025-09-27 21:41:40,897 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 31/100 | Train Loss: 0.0445 | Val rms_score: 0.6773
295
+ 2025-09-27 21:41:52,823 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 32/100 | Train Loss: 0.0469 | Val rms_score: 0.6738
296
+ 2025-09-27 21:42:06,521 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 33/100 | Train Loss: 0.0432 | Val rms_score: 0.6704
297
+ 2025-09-27 21:42:18,457 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 34/100 | Train Loss: 0.0461 | Val rms_score: 0.6892
298
+ 2025-09-27 21:42:31,347 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 35/100 | Train Loss: 0.0403 | Val rms_score: 0.6772
299
+ 2025-09-27 21:42:44,389 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 36/100 | Train Loss: 0.0411 | Val rms_score: 0.6713
300
+ 2025-09-27 21:42:55,457 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 37/100 | Train Loss: 0.0408 | Val rms_score: 0.6746
301
+ 2025-09-27 21:43:08,365 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 38/100 | Train Loss: 0.0426 | Val rms_score: 0.6732
302
+ 2025-09-27 21:43:20,082 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 39/100 | Train Loss: 0.0372 | Val rms_score: 0.6737
303
+ 2025-09-27 21:43:33,721 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 40/100 | Train Loss: 0.0395 | Val rms_score: 0.6681
304
+ 2025-09-27 21:43:45,762 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 41/100 | Train Loss: 0.0605 | Val rms_score: 0.6741
305
+ 2025-09-27 21:44:01,089 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 42/100 | Train Loss: 0.0389 | Val rms_score: 0.6694
306
+ 2025-09-27 21:44:14,826 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 43/100 | Train Loss: 0.0331 | Val rms_score: 0.6720
307
+ 2025-09-27 21:44:29,971 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 44/100 | Train Loss: 0.0355 | Val rms_score: 0.6682
308
+ 2025-09-27 21:44:41,767 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 45/100 | Train Loss: 0.0366 | Val rms_score: 0.6724
309
+ 2025-09-27 21:44:56,303 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 46/100 | Train Loss: 0.0354 | Val rms_score: 0.6747
310
+ 2025-09-27 21:45:08,923 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 47/100 | Train Loss: 0.0339 | Val rms_score: 0.6810
311
+ 2025-09-27 21:45:23,762 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 48/100 | Train Loss: 0.0344 | Val rms_score: 0.6804
312
+ 2025-09-27 21:45:34,814 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 49/100 | Train Loss: 0.0347 | Val rms_score: 0.6706
313
+ 2025-09-27 21:45:49,402 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 50/100 | Train Loss: 0.0336 | Val rms_score: 0.6727
314
+ 2025-09-27 21:46:01,151 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 51/100 | Train Loss: 0.0344 | Val rms_score: 0.6746
315
+ 2025-09-27 21:46:15,302 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 52/100 | Train Loss: 0.0367 | Val rms_score: 0.6816
316
+ 2025-09-27 21:46:28,355 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 53/100 | Train Loss: 0.0365 | Val rms_score: 0.6728
317
+ 2025-09-27 21:46:41,856 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 54/100 | Train Loss: 0.0330 | Val rms_score: 0.6715
318
+ 2025-09-27 21:46:55,366 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 55/100 | Train Loss: 0.0352 | Val rms_score: 0.6733
319
+ 2025-09-27 21:47:10,995 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 56/100 | Train Loss: 0.0331 | Val rms_score: 0.6712
320
+ 2025-09-27 21:47:25,827 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 57/100 | Train Loss: 0.0304 | Val rms_score: 0.6696
321
+ 2025-09-27 21:47:41,180 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 58/100 | Train Loss: 0.0314 | Val rms_score: 0.6744
322
+ 2025-09-27 21:47:52,622 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 59/100 | Train Loss: 0.0307 | Val rms_score: 0.6768
323
+ 2025-09-27 21:48:06,536 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 60/100 | Train Loss: 0.0312 | Val rms_score: 0.6696
324
+ 2025-09-27 21:48:17,705 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 61/100 | Train Loss: 0.0297 | Val rms_score: 0.6743
325
+ 2025-09-27 21:48:32,368 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 62/100 | Train Loss: 0.0305 | Val rms_score: 0.6743
326
+ 2025-09-27 21:48:47,233 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 63/100 | Train Loss: 0.0346 | Val rms_score: 0.6746
327
+ 2025-09-27 21:48:56,965 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 64/100 | Train Loss: 0.0311 | Val rms_score: 0.6710
328
+ 2025-09-27 21:49:10,744 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 65/100 | Train Loss: 0.0328 | Val rms_score: 0.6705
329
+ 2025-09-27 21:49:22,961 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 66/100 | Train Loss: 0.0319 | Val rms_score: 0.6680
330
+ 2025-09-27 21:49:38,622 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 67/100 | Train Loss: 0.0337 | Val rms_score: 0.6696
331
+ 2025-09-27 21:49:49,742 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 68/100 | Train Loss: 0.0299 | Val rms_score: 0.6722
332
+ 2025-09-27 21:50:03,091 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 69/100 | Train Loss: 0.0309 | Val rms_score: 0.6732
333
+ 2025-09-27 21:50:14,651 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 70/100 | Train Loss: 0.0286 | Val rms_score: 0.6733
334
+ 2025-09-27 21:50:27,968 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 71/100 | Train Loss: 0.0309 | Val rms_score: 0.6769
335
+ 2025-09-27 21:50:41,269 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 72/100 | Train Loss: 0.0275 | Val rms_score: 0.6776
336
+ 2025-09-27 21:50:56,263 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 73/100 | Train Loss: 0.0314 | Val rms_score: 0.6749
337
+ 2025-09-27 21:51:07,783 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 74/100 | Train Loss: 0.0303 | Val rms_score: 0.6720
338
+ 2025-09-27 21:51:21,290 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 75/100 | Train Loss: 0.0297 | Val rms_score: 0.6752
339
+ 2025-09-27 21:51:33,497 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 76/100 | Train Loss: 0.0307 | Val rms_score: 0.6707
340
+ 2025-09-27 21:51:49,858 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 77/100 | Train Loss: 0.0295 | Val rms_score: 0.6744
341
+ 2025-09-27 21:52:02,046 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 78/100 | Train Loss: 0.0293 | Val rms_score: 0.6769
342
+ 2025-09-27 21:52:16,488 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 79/100 | Train Loss: 0.0285 | Val rms_score: 0.6753
343
+ 2025-09-27 21:52:28,632 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 80/100 | Train Loss: 0.0297 | Val rms_score: 0.6680
344
+ 2025-09-27 21:52:44,103 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 81/100 | Train Loss: 0.0318 | Val rms_score: 0.6702
345
+ 2025-09-27 21:52:55,521 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 82/100 | Train Loss: 0.0249 | Val rms_score: 0.6699
346
+ 2025-09-27 21:53:08,757 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 83/100 | Train Loss: 0.0276 | Val rms_score: 0.6695
347
+ 2025-09-27 21:53:20,603 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 84/100 | Train Loss: 0.0260 | Val rms_score: 0.6711
348
+ 2025-09-27 21:53:35,311 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 85/100 | Train Loss: 0.0294 | Val rms_score: 0.6774
349
+ 2025-09-27 21:53:49,504 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 86/100 | Train Loss: 0.0286 | Val rms_score: 0.6713
350
+ 2025-09-27 21:54:00,859 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 87/100 | Train Loss: 0.0292 | Val rms_score: 0.6678
351
+ 2025-09-27 21:54:14,039 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 88/100 | Train Loss: 0.0270 | Val rms_score: 0.6730
352
+ 2025-09-27 21:54:25,052 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 89/100 | Train Loss: 0.0273 | Val rms_score: 0.6701
353
+ 2025-09-27 21:54:38,296 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 90/100 | Train Loss: 0.0273 | Val rms_score: 0.6653
354
+ 2025-09-27 21:54:50,254 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 91/100 | Train Loss: 0.0270 | Val rms_score: 0.6739
355
+ 2025-09-27 21:55:05,142 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 92/100 | Train Loss: 0.0273 | Val rms_score: 0.6717
356
+ 2025-09-27 21:55:18,895 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 93/100 | Train Loss: 0.0289 | Val rms_score: 0.6748
357
+ 2025-09-27 21:55:34,778 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 94/100 | Train Loss: 0.0273 | Val rms_score: 0.6695
358
+ 2025-09-27 21:55:47,572 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 95/100 | Train Loss: 0.0289 | Val rms_score: 0.6698
359
+ 2025-09-27 21:56:02,482 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 96/100 | Train Loss: 0.0237 | Val rms_score: 0.6680
360
+ 2025-09-27 21:56:13,877 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 97/100 | Train Loss: 0.0272 | Val rms_score: 0.6650
361
+ 2025-09-27 21:56:28,078 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 98/100 | Train Loss: 0.0275 | Val rms_score: 0.6725
362
+ 2025-09-27 21:56:40,382 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 99/100 | Train Loss: 0.0260 | Val rms_score: 0.6731
363
+ 2025-09-27 21:56:56,083 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Epoch 100/100 | Train Loss: 0.0250 | Val rms_score: 0.6663
364
+ 2025-09-27 21:56:57,092 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Test rms_score: 0.7658
365
+ 2025-09-27 21:56:57,497 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size16 - INFO - Final Triplicate Test Results — Avg rms_score: 0.7596, Std Dev: 0.0044
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_astrazeneca_ppb_epochs100_batch_size32_20250927_114432.log ADDED
@@ -0,0 +1,391 @@
1
+ 2025-09-27 11:44:32,427 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_ppb
2
+ 2025-09-27 11:44:32,428 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - dataset: astrazeneca_ppb, tasks: ['y'], epochs: 100, learning rate: 1e-05, transform: True
3
+ 2025-09-27 11:44:32,434 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_ppb at 2025-09-27_11-44-32
4
+ 2025-09-27 11:44:38,400 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9556 | Val rms_score: 0.1354
5
+ 2025-09-27 11:44:38,400 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
6
+ 2025-09-27 11:44:39,099 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1354
7
+ 2025-09-27 11:44:43,643 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8278 | Val rms_score: 0.1284
8
+ 2025-09-27 11:44:43,845 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
9
+ 2025-09-27 11:44:44,514 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1284
10
+ 2025-09-27 11:44:50,909 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6000 | Val rms_score: 0.1233
11
+ 2025-09-27 11:44:51,116 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 135
12
+ 2025-09-27 11:44:51,744 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.1233
13
+ 2025-09-27 11:44:58,168 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.6972 | Val rms_score: 0.1196
14
+ 2025-09-27 11:44:58,550 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 180
15
+ 2025-09-27 11:44:59,466 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.1196
16
+ 2025-09-27 11:45:05,895 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.6850 | Val rms_score: 0.1169
17
+ 2025-09-27 11:45:06,106 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 225
18
+ 2025-09-27 11:45:06,765 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.1169
19
+ 2025-09-27 11:45:10,057 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.6139 | Val rms_score: 0.1149
20
+ 2025-09-27 11:45:10,830 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 270
21
+ 2025-09-27 11:45:11,588 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.1149
22
+ 2025-09-27 11:45:18,059 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.6375 | Val rms_score: 0.1134
23
+ 2025-09-27 11:45:18,267 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 315
24
+ 2025-09-27 11:45:18,910 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.1134
25
+ 2025-09-27 11:45:24,377 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.5472 | Val rms_score: 0.1123
26
+ 2025-09-27 11:45:24,586 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 360
27
+ 2025-09-27 11:45:25,227 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.1123
28
+ 2025-09-27 11:45:31,528 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.6000 | Val rms_score: 0.1119
+ 2025-09-27 11:45:31,736 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 405
+ 2025-09-27 11:45:32,371 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.1119
+ 2025-09-27 11:45:34,942 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.4667 | Val rms_score: 0.1119
+ 2025-09-27 11:45:35,148 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 450
+ 2025-09-27 11:45:35,776 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.1119
+ 2025-09-27 11:45:42,077 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.4556 | Val rms_score: 0.1111
+ 2025-09-27 11:45:42,810 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 495
+ 2025-09-27 11:45:43,449 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.1111
+ 2025-09-27 11:45:48,946 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4594 | Val rms_score: 0.1107
+ 2025-09-27 11:45:49,145 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 540
+ 2025-09-27 11:45:49,775 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.1107
+ 2025-09-27 11:45:56,131 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.4167 | Val rms_score: 0.1112
+ 2025-09-27 11:46:01,983 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.3708 | Val rms_score: 0.1108
+ 2025-09-27 11:46:05,500 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.3583 | Val rms_score: 0.1098
+ 2025-09-27 11:46:05,724 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 675
+ 2025-09-27 11:46:06,426 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.1098
+ 2025-09-27 11:46:12,245 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.3234 | Val rms_score: 0.1100
+ 2025-09-27 11:46:19,048 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.3361 | Val rms_score: 0.1108
+ 2025-09-27 11:46:25,202 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.3422 | Val rms_score: 0.1103
+ 2025-09-27 11:46:28,659 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.3069 | Val rms_score: 0.1098
+ 2025-09-27 11:46:35,535 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.3000 | Val rms_score: 0.1097
+ 2025-09-27 11:46:35,738 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 900
+ 2025-09-27 11:46:36,404 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.1097
+ 2025-09-27 11:46:42,816 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.2861 | Val rms_score: 0.1100
+ 2025-09-27 11:46:49,743 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.2694 | Val rms_score: 0.1114
+ 2025-09-27 11:46:56,975 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.2714 | Val rms_score: 0.1117
+ 2025-09-27 11:47:00,786 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.2542 | Val rms_score: 0.1107
+ 2025-09-27 11:47:07,183 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.2288 | Val rms_score: 0.1108
+ 2025-09-27 11:47:12,865 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.2361 | Val rms_score: 0.1103
+ 2025-09-27 11:47:18,973 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.2396 | Val rms_score: 0.1107
+ 2025-09-27 11:47:22,579 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.2208 | Val rms_score: 0.1111
+ 2025-09-27 11:47:28,438 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1797 | Val rms_score: 0.1110
+ 2025-09-27 11:47:34,803 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.2167 | Val rms_score: 0.1110
+ 2025-09-27 11:47:41,155 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.2153 | Val rms_score: 0.1109
+ 2025-09-27 11:47:47,497 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.2156 | Val rms_score: 0.1112
+ 2025-09-27 11:47:50,955 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.2000 | Val rms_score: 0.1112
+ 2025-09-27 11:47:56,355 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1948 | Val rms_score: 0.1112
+ 2025-09-27 11:48:02,573 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.1986 | Val rms_score: 0.1117
+ 2025-09-27 11:48:08,137 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.1852 | Val rms_score: 0.1117
+ 2025-09-27 11:48:15,421 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.1861 | Val rms_score: 0.1115
+ 2025-09-27 11:48:18,190 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.2266 | Val rms_score: 0.1112
+ 2025-09-27 11:48:24,515 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.1875 | Val rms_score: 0.1110
+ 2025-09-27 11:48:30,549 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.1819 | Val rms_score: 0.1119
+ 2025-09-27 11:48:36,704 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.1903 | Val rms_score: 0.1120
+ 2025-09-27 11:48:43,251 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.1806 | Val rms_score: 0.1112
+ 2025-09-27 11:48:45,637 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.1714 | Val rms_score: 0.1111
+ 2025-09-27 11:48:51,875 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.1667 | Val rms_score: 0.1116
+ 2025-09-27 11:48:58,803 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.1581 | Val rms_score: 0.1119
+ 2025-09-27 11:49:05,240 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.1604 | Val rms_score: 0.1115
+ 2025-09-27 11:49:11,465 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.1604 | Val rms_score: 0.1116
+ 2025-09-27 11:49:15,289 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.1653 | Val rms_score: 0.1127
+ 2025-09-27 11:49:21,299 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.2094 | Val rms_score: 0.1120
+ 2025-09-27 11:49:27,522 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.1590 | Val rms_score: 0.1119
+ 2025-09-27 11:49:33,675 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.1583 | Val rms_score: 0.1120
+ 2025-09-27 11:49:40,189 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.1625 | Val rms_score: 0.1123
+ 2025-09-27 11:49:43,619 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.1493 | Val rms_score: 0.1124
+ 2025-09-27 11:49:48,432 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.1542 | Val rms_score: 0.1124
+ 2025-09-27 11:49:54,956 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.1535 | Val rms_score: 0.1130
+ 2025-09-27 11:50:00,620 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.1570 | Val rms_score: 0.1136
+ 2025-09-27 11:50:07,095 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.1472 | Val rms_score: 0.1140
+ 2025-09-27 11:50:10,502 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.1336 | Val rms_score: 0.1131
+ 2025-09-27 11:50:16,396 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.1486 | Val rms_score: 0.1129
+ 2025-09-27 11:50:22,630 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.1521 | Val rms_score: 0.1144
+ 2025-09-27 11:50:27,873 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.1528 | Val rms_score: 0.1129
+ 2025-09-27 11:50:34,589 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.1396 | Val rms_score: 0.1127
+ 2025-09-27 11:50:37,196 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.1348 | Val rms_score: 0.1127
+ 2025-09-27 11:50:43,521 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.1486 | Val rms_score: 0.1129
+ 2025-09-27 11:50:49,392 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.1313 | Val rms_score: 0.1138
+ 2025-09-27 11:50:55,486 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.1417 | Val rms_score: 0.1132
+ 2025-09-27 11:51:00,336 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.1531 | Val rms_score: 0.1136
+ 2025-09-27 11:51:06,998 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.1431 | Val rms_score: 0.1131
+ 2025-09-27 11:51:13,357 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.1109 | Val rms_score: 0.1131
+ 2025-09-27 11:51:19,255 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.1410 | Val rms_score: 0.1133
+ 2025-09-27 11:51:25,477 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.1347 | Val rms_score: 0.1135
+ 2025-09-27 11:51:29,031 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.1391 | Val rms_score: 0.1132
+ 2025-09-27 11:51:34,790 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.1361 | Val rms_score: 0.1131
+ 2025-09-27 11:51:40,438 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.1385 | Val rms_score: 0.1128
+ 2025-09-27 11:51:46,647 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.1375 | Val rms_score: 0.1128
+ 2025-09-27 11:51:52,838 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.1383 | Val rms_score: 0.1130
+ 2025-09-27 11:51:57,002 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.1361 | Val rms_score: 0.1127
+ 2025-09-27 11:52:03,663 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.1391 | Val rms_score: 0.1127
+ 2025-09-27 11:52:09,828 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.1278 | Val rms_score: 0.1127
+ 2025-09-27 11:52:16,411 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.1299 | Val rms_score: 0.1131
+ 2025-09-27 11:52:22,135 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.1292 | Val rms_score: 0.1128
+ 2025-09-27 11:52:26,343 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.1347 | Val rms_score: 0.1133
+ 2025-09-27 11:52:32,352 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.1250 | Val rms_score: 0.1138
+ 2025-09-27 11:52:38,800 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.1215 | Val rms_score: 0.1134
+ 2025-09-27 11:52:45,472 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.1425 | Val rms_score: 0.1131
+ 2025-09-27 11:52:51,892 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.1243 | Val rms_score: 0.1130
+ 2025-09-27 11:52:55,920 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.1276 | Val rms_score: 0.1135
+ 2025-09-27 11:53:01,531 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.1375 | Val rms_score: 0.1127
+ 2025-09-27 11:53:08,166 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.1227 | Val rms_score: 0.1132
+ 2025-09-27 11:53:13,738 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.1278 | Val rms_score: 0.1132
+ 2025-09-27 11:53:17,153 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.1285 | Val rms_score: 0.1131
+ 2025-09-27 11:53:23,595 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.1250 | Val rms_score: 0.1129
+ 2025-09-27 11:53:29,861 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.1208 | Val rms_score: 0.1132
+ 2025-09-27 11:53:36,264 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.1260 | Val rms_score: 0.1130
+ 2025-09-27 11:53:42,560 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.1236 | Val rms_score: 0.1136
+ 2025-09-27 11:53:46,106 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.1125 | Val rms_score: 0.1135
+ 2025-09-27 11:53:52,442 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.1243 | Val rms_score: 0.1131
+ 2025-09-27 11:53:58,901 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.1219 | Val rms_score: 0.1132
+ 2025-09-27 11:54:04,659 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.1215 | Val rms_score: 0.1132
+ 2025-09-27 11:54:11,016 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.1160 | Val rms_score: 0.1132
+ 2025-09-27 11:54:11,600 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1150
+ 2025-09-27 11:54:12,258 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_ppb at 2025-09-27_11-54-12
+ 2025-09-27 11:54:15,560 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9556 | Val rms_score: 0.1347
+ 2025-09-27 11:54:15,561 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-27 11:54:16,447 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1347
+ 2025-09-27 11:54:22,895 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8333 | Val rms_score: 0.1279
+ 2025-09-27 11:54:23,311 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
+ 2025-09-27 11:54:24,072 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1279
+ 2025-09-27 11:54:30,059 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.7821 | Val rms_score: 0.1226
+ 2025-09-27 11:54:30,262 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 135
+ 2025-09-27 11:54:30,889 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.1226
+ 2025-09-27 11:54:36,643 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.6917 | Val rms_score: 0.1194
+ 2025-09-27 11:54:36,877 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 180
+ 2025-09-27 11:54:37,921 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.1194
+ 2025-09-27 11:54:41,462 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4775 | Val rms_score: 0.1177
+ 2025-09-27 11:54:41,680 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 225
+ 2025-09-27 11:54:42,331 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.1177
+ 2025-09-27 11:54:48,554 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.5972 | Val rms_score: 0.1145
+ 2025-09-27 11:54:49,379 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 270
+ 2025-09-27 11:54:50,015 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.1145
+ 2025-09-27 11:54:55,998 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.6250 | Val rms_score: 0.1137
+ 2025-09-27 11:54:56,205 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 315
+ 2025-09-27 11:54:56,918 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.1137
+ 2025-09-27 11:55:03,238 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.5222 | Val rms_score: 0.1120
+ 2025-09-27 11:55:03,451 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 360
+ 2025-09-27 11:55:04,090 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.1120
+ 2025-09-27 11:55:06,986 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.6844 | Val rms_score: 0.1114
+ 2025-09-27 11:55:07,199 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 405
+ 2025-09-27 11:55:07,863 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.1114
+ 2025-09-27 11:55:14,338 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.4861 | Val rms_score: 0.1115
+ 2025-09-27 11:55:20,739 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.4333 | Val rms_score: 0.1103
+ 2025-09-27 11:55:21,531 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 495
+ 2025-09-27 11:55:22,174 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.1103
+ 2025-09-27 11:55:28,430 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4344 | Val rms_score: 0.1098
+ 2025-09-27 11:55:28,639 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 540
+ 2025-09-27 11:55:29,276 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.1098
+ 2025-09-27 11:55:35,117 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.3861 | Val rms_score: 0.1100
+ 2025-09-27 11:55:38,808 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.4000 | Val rms_score: 0.1100
+ 2025-09-27 11:55:45,002 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.3694 | Val rms_score: 0.1098
+ 2025-09-27 11:55:45,216 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 675
+ 2025-09-27 11:55:45,866 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.1098
+ 2025-09-27 11:55:52,101 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.3438 | Val rms_score: 0.1105
+ 2025-09-27 11:55:58,483 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.3333 | Val rms_score: 0.1101
+ 2025-09-27 11:56:01,100 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.3422 | Val rms_score: 0.1101
+ 2025-09-27 11:56:06,664 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.3083 | Val rms_score: 0.1110
+ 2025-09-27 11:56:11,617 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.2833 | Val rms_score: 0.1105
+ 2025-09-27 11:56:17,108 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.2792 | Val rms_score: 0.1120
+ 2025-09-27 11:56:23,360 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.2736 | Val rms_score: 0.1113
+ 2025-09-27 11:56:29,800 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.2482 | Val rms_score: 0.1125
+ 2025-09-27 11:56:32,634 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.2528 | Val rms_score: 0.1114
+ 2025-09-27 11:56:37,790 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.2325 | Val rms_score: 0.1110
+ 2025-09-27 11:56:43,233 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.2375 | Val rms_score: 0.1117
+ 2025-09-27 11:56:49,327 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.2115 | Val rms_score: 0.1122
+ 2025-09-27 11:56:55,906 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.2194 | Val rms_score: 0.1120
+ 2025-09-27 11:56:58,303 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.2078 | Val rms_score: 0.1114
+ 2025-09-27 11:57:03,746 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.2139 | Val rms_score: 0.1119
+ 2025-09-27 11:57:09,244 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.2139 | Val rms_score: 0.1122
+ 2025-09-27 11:57:15,183 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.2078 | Val rms_score: 0.1119
+ 2025-09-27 11:57:20,883 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.2028 | Val rms_score: 0.1124
+ 2025-09-27 11:57:23,226 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1812 | Val rms_score: 0.1130
+ 2025-09-27 11:57:28,334 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.1833 | Val rms_score: 0.1130
+ 2025-09-27 11:57:33,703 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.1734 | Val rms_score: 0.1137
+ 2025-09-27 11:57:39,601 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.1847 | Val rms_score: 0.1122
+ 2025-09-27 11:57:45,293 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.1711 | Val rms_score: 0.1125
+ 2025-09-27 11:57:50,222 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.1771 | Val rms_score: 0.1124
+ 2025-09-27 11:57:52,699 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.1889 | Val rms_score: 0.1128
+ 2025-09-27 11:57:58,141 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.1681 | Val rms_score: 0.1129
+ 2025-09-27 11:58:03,772 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.1750 | Val rms_score: 0.1131
+ 2025-09-27 11:58:09,293 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.1830 | Val rms_score: 0.1129
+ 2025-09-27 11:58:14,087 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.1667 | Val rms_score: 0.1126
+ 2025-09-27 11:58:17,671 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.1762 | Val rms_score: 0.1122
+ 2025-09-27 11:58:22,661 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.1604 | Val rms_score: 0.1133
+ 2025-09-27 11:58:28,546 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.1844 | Val rms_score: 0.1127
+ 2025-09-27 11:58:34,175 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.1660 | Val rms_score: 0.1129
+ 2025-09-27 11:58:39,325 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.1383 | Val rms_score: 0.1135
+ 2025-09-27 11:58:44,729 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.1479 | Val rms_score: 0.1136
+ 2025-09-27 11:58:46,996 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.1542 | Val rms_score: 0.1138
+ 2025-09-27 11:58:52,975 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.1547 | Val rms_score: 0.1136
+ 2025-09-27 11:58:58,505 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.1451 | Val rms_score: 0.1140
+ 2025-09-27 11:59:03,996 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.1583 | Val rms_score: 0.1143
+ 2025-09-27 11:59:09,624 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.1417 | Val rms_score: 0.1137
+ 2025-09-27 11:59:12,403 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.1523 | Val rms_score: 0.1132
+ 2025-09-27 11:59:18,637 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.1451 | Val rms_score: 0.1133
+ 2025-09-27 11:59:24,561 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.1562 | Val rms_score: 0.1132
+ 2025-09-27 11:59:29,456 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.1451 | Val rms_score: 0.1140
+ 2025-09-27 11:59:35,133 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.1410 | Val rms_score: 0.1136
+ 2025-09-27 11:59:40,148 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.1444 | Val rms_score: 0.1137
+ 2025-09-27 11:59:43,606 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.1389 | Val rms_score: 0.1139
+ 2025-09-27 11:59:49,067 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.1437 | Val rms_score: 0.1137
+ 2025-09-27 11:59:54,489 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.1361 | Val rms_score: 0.1135
+ 2025-09-27 11:59:59,933 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.1294 | Val rms_score: 0.1137
+ 2025-09-27 12:00:04,991 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.1424 | Val rms_score: 0.1141
+ 2025-09-27 12:00:09,468 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.1594 | Val rms_score: 0.1137
+ 2025-09-27 12:00:14,484 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.1389 | Val rms_score: 0.1138
+ 2025-09-27 12:00:19,991 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.1453 | Val rms_score: 0.1138
+ 2025-09-27 12:00:25,336 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.1361 | Val rms_score: 0.1142
+ 2025-09-27 12:00:30,082 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.1424 | Val rms_score: 0.1141
+ 2025-09-27 12:00:33,416 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.1383 | Val rms_score: 0.1143
+ 2025-09-27 12:00:38,480 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.1201 | Val rms_score: 0.1140
+ 2025-09-27 12:00:44,104 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.1396 | Val rms_score: 0.1140
+ 2025-09-27 12:00:49,252 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.1375 | Val rms_score: 0.1140
+ 2025-09-27 12:00:54,304 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.1469 | Val rms_score: 0.1138
+ 2025-09-27 12:01:00,422 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.1292 | Val rms_score: 0.1139
+ 2025-09-27 12:01:02,877 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0977 | Val rms_score: 0.1140
+ 2025-09-27 12:01:08,775 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.1299 | Val rms_score: 0.1142
+ 2025-09-27 12:01:14,672 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.1257 | Val rms_score: 0.1145
+ 2025-09-27 12:01:20,047 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.1313 | Val rms_score: 0.1141
+ 2025-09-27 12:01:26,375 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.1306 | Val rms_score: 0.1145
+ 2025-09-27 12:01:28,612 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.1214 | Val rms_score: 0.1143
+ 2025-09-27 12:01:33,940 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.1299 | Val rms_score: 0.1143
+ 2025-09-27 12:01:39,237 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.1281 | Val rms_score: 0.1139
+ 2025-09-27 12:01:44,624 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.1243 | Val rms_score: 0.1139
+ 2025-09-27 12:01:50,799 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.1323 | Val rms_score: 0.1138
+ 2025-09-27 12:01:55,993 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.1222 | Val rms_score: 0.1138
+ 2025-09-27 12:02:00,129 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.1258 | Val rms_score: 0.1141
+ 2025-09-27 12:02:05,299 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.1326 | Val rms_score: 0.1139
+ 2025-09-27 12:02:10,706 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.1236 | Val rms_score: 0.1140
+ 2025-09-27 12:02:16,390 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.1266 | Val rms_score: 0.1139
+ 2025-09-27 12:02:21,343 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.1243 | Val rms_score: 0.1139
+ 2025-09-27 12:02:24,250 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.1255 | Val rms_score: 0.1141
+ 2025-09-27 12:02:29,898 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.1222 | Val rms_score: 0.1141
+ 2025-09-27 12:02:35,903 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.1000 | Val rms_score: 0.1145
+ 2025-09-27 12:02:43,176 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.1278 | Val rms_score: 0.1147
+ 2025-09-27 12:02:49,014 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.1523 | Val rms_score: 0.1143
+ 2025-09-27 12:02:51,876 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.1201 | Val rms_score: 0.1147
+ 2025-09-27 12:02:57,169 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.1299 | Val rms_score: 0.1146
+ 2025-09-27 12:02:57,801 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1148
+ 2025-09-27 12:02:58,466 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_ppb at 2025-09-27_12-02-58
+ 2025-09-27 12:03:04,113 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9611 | Val rms_score: 0.1370
+ 2025-09-27 12:03:04,113 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-27 12:03:05,535 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1370
+ 2025-09-27 12:03:10,680 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8444 | Val rms_score: 0.1282
+ 2025-09-27 12:03:10,860 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
+ 2025-09-27 12:03:11,573 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1282
266
+ 2025-09-27 12:03:17,236 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.7929 | Val rms_score: 0.1227
267
+ 2025-09-27 12:03:17,438 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 135
268
+ 2025-09-27 12:03:18,070 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.1227
269
+ 2025-09-27 12:03:20,546 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.7000 | Val rms_score: 0.1200
270
+ 2025-09-27 12:03:20,750 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 180
271
+ 2025-09-27 12:03:21,373 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.1200
272
+ 2025-09-27 12:03:26,930 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.6375 | Val rms_score: 0.1175
273
+ 2025-09-27 12:03:27,138 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 225
274
+ 2025-09-27 12:03:27,778 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.1175
275
+ 2025-09-27 12:03:33,543 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.6222 | Val rms_score: 0.1149
276
+ 2025-09-27 12:03:34,448 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 270
277
+ 2025-09-27 12:03:35,218 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.1149
278
+ 2025-09-27 12:03:41,058 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.6417 | Val rms_score: 0.1135
279
+ 2025-09-27 12:03:41,260 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 315
280
+ 2025-09-27 12:03:42,355 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.1135
281
+ 2025-09-27 12:03:45,105 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.5333 | Val rms_score: 0.1134
282
+ 2025-09-27 12:03:45,307 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 360
283
+ 2025-09-27 12:03:45,936 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.1134
284
+ 2025-09-27 12:03:51,528 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.6188 | Val rms_score: 0.1120
285
+ 2025-09-27 12:03:51,723 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 405
286
+ 2025-09-27 12:03:52,375 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.1120
287
+ 2025-09-27 12:03:57,878 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.4833 | Val rms_score: 0.1119
288
+ 2025-09-27 12:03:58,085 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 450
289
+ 2025-09-27 12:03:58,751 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.1119
290
+ 2025-09-27 12:04:04,591 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.4694 | Val rms_score: 0.1113
291
+ 2025-09-27 12:04:05,370 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 495
292
+ 2025-09-27 12:04:06,048 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.1113
293
+ 2025-09-27 12:04:12,034 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4437 | Val rms_score: 0.1105
294
+ 2025-09-27 12:04:12,253 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 540
295
+ 2025-09-27 12:04:12,989 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.1105
296
+ 2025-09-27 12:04:17,513 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.4111 | Val rms_score: 0.1103
297
+ 2025-09-27 12:04:17,826 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 585
298
+ 2025-09-27 12:04:18,527 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.1103
299
+ 2025-09-27 12:04:23,757 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.4125 | Val rms_score: 0.1109
300
+ 2025-09-27 12:04:29,497 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.3778 | Val rms_score: 0.1102
301
+ 2025-09-27 12:04:29,703 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 675
302
+ 2025-09-27 12:04:30,352 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.1102
303
+ 2025-09-27 12:04:36,008 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.3531 | Val rms_score: 0.1105
304
+ 2025-09-27 12:04:39,467 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.3417 | Val rms_score: 0.1108
305
+ 2025-09-27 12:04:44,929 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.3047 | Val rms_score: 0.1102
306
+ 2025-09-27 12:04:49,822 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.3014 | Val rms_score: 0.1111
307
+ 2025-09-27 12:04:55,269 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.3056 | Val rms_score: 0.1115
308
+ 2025-09-27 12:05:00,687 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.2861 | Val rms_score: 0.1100
309
+ 2025-09-27 12:05:01,631 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 945
310
+ 2025-09-27 12:05:02,670 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 21 with val rms_score: 0.1100
311
+ 2025-09-27 12:05:08,091 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.2792 | Val rms_score: 0.1106
312
+ 2025-09-27 12:05:11,792 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.2518 | Val rms_score: 0.1105
313
+ 2025-09-27 12:05:17,186 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.2542 | Val rms_score: 0.1108
314
+ 2025-09-27 12:05:22,580 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.2575 | Val rms_score: 0.1123
315
+ 2025-09-27 12:05:27,342 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.2528 | Val rms_score: 0.1109
316
+ 2025-09-27 12:05:33,296 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.2729 | Val rms_score: 0.1116
317
+ 2025-09-27 12:05:35,990 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.2403 | Val rms_score: 0.1114
318
+ 2025-09-27 12:05:41,354 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.2422 | Val rms_score: 0.1122
319
+ 2025-09-27 12:05:46,919 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.2236 | Val rms_score: 0.1129
320
+ 2025-09-27 12:05:52,098 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.2222 | Val rms_score: 0.1117
321
+ 2025-09-27 12:05:58,170 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.2188 | Val rms_score: 0.1118
322
+ 2025-09-27 12:06:00,532 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.2028 | Val rms_score: 0.1125
323
+ 2025-09-27 12:06:06,429 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.1823 | Val rms_score: 0.1125
324
+ 2025-09-27 12:06:11,889 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.1986 | Val rms_score: 0.1123
325
+ 2025-09-27 12:06:17,300 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.1773 | Val rms_score: 0.1139
326
+ 2025-09-27 12:06:23,803 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.1931 | Val rms_score: 0.1124
327
+ 2025-09-27 12:06:29,897 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.1758 | Val rms_score: 0.1125
328
+ 2025-09-27 12:06:33,223 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.1861 | Val rms_score: 0.1135
329
+ 2025-09-27 12:06:39,250 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.1861 | Val rms_score: 0.1132
330
+ 2025-09-27 12:06:44,373 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.1792 | Val rms_score: 0.1143
331
+ 2025-09-27 12:06:50,461 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.1778 | Val rms_score: 0.1127
332
+ 2025-09-27 12:06:55,601 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.1759 | Val rms_score: 0.1131
333
+ 2025-09-27 12:06:58,187 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.1611 | Val rms_score: 0.1130
334
+ 2025-09-27 12:07:04,355 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.1850 | Val rms_score: 0.1134
335
+ 2025-09-27 12:07:10,007 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.1722 | Val rms_score: 0.1134
336
+ 2025-09-27 12:07:16,242 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.1646 | Val rms_score: 0.1134
337
+ 2025-09-27 12:07:21,340 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.1604 | Val rms_score: 0.1130
338
+ 2025-09-27 12:07:24,276 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.1508 | Val rms_score: 0.1135
339
+ 2025-09-27 12:07:29,408 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.1667 | Val rms_score: 0.1131
340
+ 2025-09-27 12:07:34,510 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.1597 | Val rms_score: 0.1138
341
+ 2025-09-27 12:07:40,410 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.1562 | Val rms_score: 0.1137
342
+ 2025-09-27 12:07:45,645 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.1486 | Val rms_score: 0.1135
343
+ 2025-09-27 12:07:51,546 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.1417 | Val rms_score: 0.1133
344
+ 2025-09-27 12:07:54,318 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.1535 | Val rms_score: 0.1138
345
+ 2025-09-27 12:08:00,175 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.1461 | Val rms_score: 0.1135
346
+ 2025-09-27 12:08:05,997 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.1437 | Val rms_score: 0.1135
347
+ 2025-09-27 12:08:11,150 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.1211 | Val rms_score: 0.1138
348
+ 2025-09-27 12:08:16,706 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.1507 | Val rms_score: 0.1134
349
+ 2025-09-27 12:08:19,806 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.1486 | Val rms_score: 0.1133
350
+ 2025-09-27 12:08:26,514 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.1458 | Val rms_score: 0.1129
351
+ 2025-09-27 12:08:33,335 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.1458 | Val rms_score: 0.1131
352
+ 2025-09-27 12:08:39,200 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.1420 | Val rms_score: 0.1130
353
+ 2025-09-27 12:08:45,944 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.1528 | Val rms_score: 0.1133
354
+ 2025-09-27 12:08:49,219 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.1525 | Val rms_score: 0.1132
355
+ 2025-09-27 12:08:54,753 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.1347 | Val rms_score: 0.1137
356
+ 2025-09-27 12:09:01,391 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.1521 | Val rms_score: 0.1138
357
+ 2025-09-27 12:09:06,784 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.1347 | Val rms_score: 0.1136
358
+ 2025-09-27 12:09:12,081 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.1703 | Val rms_score: 0.1137
359
+ 2025-09-27 12:09:14,391 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.1465 | Val rms_score: 0.1137
360
+ 2025-09-27 12:09:19,794 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.1368 | Val rms_score: 0.1140
361
+ 2025-09-27 12:09:25,613 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.1430 | Val rms_score: 0.1135
362
+ 2025-09-27 12:09:31,324 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.1347 | Val rms_score: 0.1138
363
+ 2025-09-27 12:09:36,721 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.1406 | Val rms_score: 0.1139
364
+ 2025-09-27 12:09:38,927 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.1306 | Val rms_score: 0.1139
365
+ 2025-09-27 12:09:44,547 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.1180 | Val rms_score: 0.1136
366
+ 2025-09-27 12:09:50,356 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.1403 | Val rms_score: 0.1135
367
+ 2025-09-27 12:09:55,597 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.1562 | Val rms_score: 0.1132
368
+ 2025-09-27 12:10:00,967 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.1410 | Val rms_score: 0.1138
369
+ 2025-09-27 12:10:05,798 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.1313 | Val rms_score: 0.1141
370
+ 2025-09-27 12:10:08,502 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.1306 | Val rms_score: 0.1141
371
+ 2025-09-27 12:10:14,223 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.1271 | Val rms_score: 0.1137
372
+ 2025-09-27 12:10:19,697 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.1241 | Val rms_score: 0.1137
373
+ 2025-09-27 12:10:24,965 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.1326 | Val rms_score: 0.1132
374
+ 2025-09-27 12:10:29,834 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.1294 | Val rms_score: 0.1134
375
+ 2025-09-27 12:10:35,113 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.1257 | Val rms_score: 0.1139
376
+ 2025-09-27 12:10:37,839 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.1146 | Val rms_score: 0.1136
377
+ 2025-09-27 12:10:42,971 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.1285 | Val rms_score: 0.1137
378
+ 2025-09-27 12:10:49,202 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.1781 | Val rms_score: 0.1137
379
+ 2025-09-27 12:10:54,539 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.1229 | Val rms_score: 0.1135
380
+ 2025-09-27 12:10:59,505 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.1250 | Val rms_score: 0.1138
381
+ 2025-09-27 12:11:02,178 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.1273 | Val rms_score: 0.1133
382
+ 2025-09-27 12:11:07,403 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.1271 | Val rms_score: 0.1137
383
+ 2025-09-27 12:11:12,716 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.1365 | Val rms_score: 0.1138
384
+ 2025-09-27 12:11:17,834 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.1278 | Val rms_score: 0.1139
385
+ 2025-09-27 12:11:23,194 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.1281 | Val rms_score: 0.1138
386
+ 2025-09-27 12:11:28,507 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.1264 | Val rms_score: 0.1138
387
+ 2025-09-27 12:11:31,987 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.1266 | Val rms_score: 0.1136
388
+ 2025-09-27 12:11:36,979 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.1271 | Val rms_score: 0.1140
389
+ 2025-09-27 12:11:42,141 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.1271 | Val rms_score: 0.1139
390
+ 2025-09-27 12:11:42,704 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1153
391
+ 2025-09-27 12:11:43,268 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.1150, Std Dev: 0.0002
logs_modchembert_regression_ModChemBERT-MLM-DAPT-TAFT-OPT/modchembert_deepchem_splits_run_astrazeneca_solubility_epochs100_batch_size32_20250927_155133.log ADDED
@@ -0,0 +1,391 @@
1
+ 2025-09-27 15:51:33,335 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_solubility
2
+ 2025-09-27 15:51:33,335 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - dataset: astrazeneca_solubility, tasks: ['y'], epochs: 100, learning rate: 1e-05, transform: True
3
+ 2025-09-27 15:51:33,340 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_solubility at 2025-09-27_15-51-33
4
+ 2025-09-27 15:51:38,624 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9944 | Val rms_score: 0.9840
5
+ 2025-09-27 15:51:38,625 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
6
+ 2025-09-27 15:51:42,196 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9840
7
+ 2025-09-27 15:51:48,951 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8611 | Val rms_score: 0.9530
8
+ 2025-09-27 15:51:49,135 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
9
+ 2025-09-27 15:51:49,993 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.9530
10
+ 2025-09-27 15:51:57,853 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.7571 | Val rms_score: 0.9340
11
+ 2025-09-27 15:51:58,038 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 135
12
+ 2025-09-27 15:51:58,625 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.9340
13
+ 2025-09-27 15:52:04,731 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.7444 | Val rms_score: 0.9168
14
+ 2025-09-27 15:52:04,915 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 180
15
+ 2025-09-27 15:52:05,494 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.9168
16
+ 2025-09-27 15:52:11,593 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.6850 | Val rms_score: 0.9027
17
+ 2025-09-27 15:52:11,779 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 225
18
+ 2025-09-27 15:52:12,374 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.9027
19
+ 2025-09-27 15:52:18,587 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.6528 | Val rms_score: 0.8862
20
+ 2025-09-27 15:52:19,243 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 270
21
+ 2025-09-27 15:52:19,877 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.8862
22
+ 2025-09-27 15:52:25,874 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.6250 | Val rms_score: 0.8835
23
+ 2025-09-27 15:52:26,060 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 315
24
+ 2025-09-27 15:52:26,643 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.8835
25
+ 2025-09-27 15:52:33,403 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.5722 | Val rms_score: 0.8967
26
+ 2025-09-27 15:52:39,578 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.7594 | Val rms_score: 0.8926
27
+ 2025-09-27 15:52:45,787 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.5028 | Val rms_score: 0.8639
28
+ 2025-09-27 15:52:45,978 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 450
29
+ 2025-09-27 15:52:46,557 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.8639
30
+ 2025-09-27 15:52:52,520 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.4861 | Val rms_score: 0.8631
31
+ 2025-09-27 15:52:53,039 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 495
32
+ 2025-09-27 15:52:53,631 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.8631
33
+ 2025-09-27 15:52:59,609 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4844 | Val rms_score: 0.8749
34
+ 2025-09-27 15:53:05,458 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.4889 | Val rms_score: 0.8789
35
+ 2025-09-27 15:53:11,356 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.4375 | Val rms_score: 0.8929
36
+ 2025-09-27 15:53:17,369 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.4250 | Val rms_score: 0.8801
37
+ 2025-09-27 15:53:23,305 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.4219 | Val rms_score: 0.8765
38
+ 2025-09-27 15:53:29,658 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.3972 | Val rms_score: 0.8672
39
+ 2025-09-27 15:53:35,492 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.4062 | Val rms_score: 0.8563
40
+ 2025-09-27 15:53:35,669 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 810
41
+ 2025-09-27 15:53:36,251 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 18 with val rms_score: 0.8563
42
+ 2025-09-27 15:53:42,335 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.3806 | Val rms_score: 0.8704
43
+ 2025-09-27 15:53:48,144 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.4222 | Val rms_score: 0.8611
44
+ 2025-09-27 15:53:54,791 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.3861 | Val rms_score: 0.8508
45
+ 2025-09-27 15:53:55,339 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 945
46
+ 2025-09-27 15:53:55,958 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 21 with val rms_score: 0.8508
47
+ 2025-09-27 15:54:02,490 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.3542 | Val rms_score: 0.8604
48
+ 2025-09-27 15:54:09,437 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.3571 | Val rms_score: 0.8520
49
+ 2025-09-27 15:54:15,407 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.3306 | Val rms_score: 0.8802
50
+ 2025-09-27 15:54:21,168 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.3250 | Val rms_score: 0.8663
51
+ 2025-09-27 15:54:26,969 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.3222 | Val rms_score: 0.8597
52
+ 2025-09-27 15:54:33,345 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.3167 | Val rms_score: 0.8682
53
+ 2025-09-27 15:54:40,257 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.3167 | Val rms_score: 0.8540
54
+ 2025-09-27 15:54:46,472 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.2375 | Val rms_score: 0.8525
55
+ 2025-09-27 15:54:53,531 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.3014 | Val rms_score: 0.8597
56
+ 2025-09-27 15:54:59,974 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.2986 | Val rms_score: 0.8536
57
+ 2025-09-27 15:55:06,355 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.2984 | Val rms_score: 0.8680
58
+ 2025-09-27 15:55:12,234 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.2806 | Val rms_score: 0.8639
59
+ 2025-09-27 15:55:18,388 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.3083 | Val rms_score: 0.8600
60
+ 2025-09-27 15:55:24,611 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.2667 | Val rms_score: 0.8674
61
+ 2025-09-27 15:55:30,469 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.2828 | Val rms_score: 0.8715
62
+ 2025-09-27 15:55:36,557 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.2931 | Val rms_score: 0.8814
63
+ 2025-09-27 15:55:42,475 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.2672 | Val rms_score: 0.8739
+ 2025-09-27 15:55:48,371 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.2611 | Val rms_score: 0.8740
+ 2025-09-27 15:55:54,337 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.2903 | Val rms_score: 0.8443
+ 2025-09-27 15:55:54,488 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1800
+ 2025-09-27 15:55:55,063 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 40 with val rms_score: 0.8443
+ 2025-09-27 15:56:01,038 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.2778 | Val rms_score: 0.8530
+ 2025-09-27 15:56:07,372 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.2806 | Val rms_score: 0.8804
+ 2025-09-27 15:56:13,302 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.2750 | Val rms_score: 0.8692
+ 2025-09-27 15:56:19,223 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.2472 | Val rms_score: 0.8710
+ 2025-09-27 15:56:26,495 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.2675 | Val rms_score: 0.8546
+ 2025-09-27 15:56:32,605 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.2444 | Val rms_score: 0.8496
+ 2025-09-27 15:56:38,806 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.2479 | Val rms_score: 0.8527
+ 2025-09-27 15:56:44,789 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.2417 | Val rms_score: 0.8607
+ 2025-09-27 15:56:50,808 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.1688 | Val rms_score: 0.8563
+ 2025-09-27 15:56:56,894 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.2250 | Val rms_score: 0.8477
+ 2025-09-27 15:57:02,723 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.2278 | Val rms_score: 0.8660
+ 2025-09-27 15:57:10,337 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.2328 | Val rms_score: 0.8595
+ 2025-09-27 15:57:16,423 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.2236 | Val rms_score: 0.8621
+ 2025-09-27 15:57:22,392 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.2323 | Val rms_score: 0.8561
+ 2025-09-27 15:57:28,505 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.2222 | Val rms_score: 0.8590
+ 2025-09-27 15:57:34,542 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.2297 | Val rms_score: 0.8649
+ 2025-09-27 15:57:40,915 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.2194 | Val rms_score: 0.8536
+ 2025-09-27 15:57:46,840 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.2328 | Val rms_score: 0.8636
+ 2025-09-27 15:57:52,755 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.2194 | Val rms_score: 0.8608
+ 2025-09-27 15:57:58,841 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.2069 | Val rms_score: 0.8570
+ 2025-09-27 15:58:04,746 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.2250 | Val rms_score: 0.8563
+ 2025-09-27 15:58:11,049 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.2069 | Val rms_score: 0.8519
+ 2025-09-27 15:58:17,111 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.2080 | Val rms_score: 0.8586
+ 2025-09-27 15:58:23,102 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.2056 | Val rms_score: 0.8524
+ 2025-09-27 15:58:29,098 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.2050 | Val rms_score: 0.8636
+ 2025-09-27 15:58:35,077 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.2014 | Val rms_score: 0.8566
+ 2025-09-27 15:58:42,495 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.2365 | Val rms_score: 0.8795
+ 2025-09-27 15:58:48,571 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.2125 | Val rms_score: 0.8701
+ 2025-09-27 15:58:54,546 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.2500 | Val rms_score: 0.8703
+ 2025-09-27 15:59:00,653 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.2069 | Val rms_score: 0.8733
+ 2025-09-27 15:59:06,533 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.2000 | Val rms_score: 0.8667
+ 2025-09-27 15:59:12,712 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.2000 | Val rms_score: 0.8558
+ 2025-09-27 15:59:18,753 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.1958 | Val rms_score: 0.8562
+ 2025-09-27 15:59:24,611 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.1990 | Val rms_score: 0.8709
+ 2025-09-27 15:59:30,770 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.1917 | Val rms_score: 0.8642
+ 2025-09-27 15:59:36,756 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.1859 | Val rms_score: 0.8632
+ 2025-09-27 15:59:43,147 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.1944 | Val rms_score: 0.8611
+ 2025-09-27 15:59:49,182 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.1914 | Val rms_score: 0.8622
+ 2025-09-27 15:59:55,276 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.1875 | Val rms_score: 0.8563
+ 2025-09-27 16:00:01,248 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.1833 | Val rms_score: 0.8613
+ 2025-09-27 16:00:07,309 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.1792 | Val rms_score: 0.8599
+ 2025-09-27 16:00:13,861 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.1917 | Val rms_score: 0.8592
+ 2025-09-27 16:00:19,778 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.1804 | Val rms_score: 0.8580
+ 2025-09-27 16:00:25,843 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.1903 | Val rms_score: 0.8584
+ 2025-09-27 16:00:31,757 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.1775 | Val rms_score: 0.8626
+ 2025-09-27 16:00:37,803 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.1819 | Val rms_score: 0.8616
+ 2025-09-27 16:00:44,147 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.1677 | Val rms_score: 0.8581
+ 2025-09-27 16:00:50,049 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.1757 | Val rms_score: 0.8593
+ 2025-09-27 16:00:57,087 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.2594 | Val rms_score: 0.8575
+ 2025-09-27 16:01:02,987 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.1847 | Val rms_score: 0.8661
+ 2025-09-27 16:01:09,085 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.1806 | Val rms_score: 0.8581
+ 2025-09-27 16:01:15,249 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.1805 | Val rms_score: 0.8563
+ 2025-09-27 16:01:21,155 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.1819 | Val rms_score: 0.8602
+ 2025-09-27 16:01:27,087 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.1760 | Val rms_score: 0.8646
+ 2025-09-27 16:01:33,078 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.1806 | Val rms_score: 0.8593
+ 2025-09-27 16:01:39,037 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.1836 | Val rms_score: 0.8577
+ 2025-09-27 16:01:45,464 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.1688 | Val rms_score: 0.8545
+ 2025-09-27 16:01:51,593 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.1812 | Val rms_score: 0.8532
+ 2025-09-27 16:01:57,556 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.1708 | Val rms_score: 0.8649
+ 2025-09-27 16:02:03,843 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.1729 | Val rms_score: 0.8603
+ 2025-09-27 16:02:04,433 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.8737
+ 2025-09-27 16:02:04,829 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_solubility at 2025-09-27_16-02-04
+ 2025-09-27 16:02:09,970 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9667 | Val rms_score: 0.9804
+ 2025-09-27 16:02:09,970 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-27 16:02:10,555 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9804
+ 2025-09-27 16:02:18,913 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8167 | Val rms_score: 0.9460
+ 2025-09-27 16:02:19,090 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
+ 2025-09-27 16:02:19,697 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.9460
+ 2025-09-27 16:02:26,026 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.7464 | Val rms_score: 0.9266
+ 2025-09-27 16:02:26,215 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 135
+ 2025-09-27 16:02:26,800 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.9266
+ 2025-09-27 16:02:33,196 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.6972 | Val rms_score: 0.9024
+ 2025-09-27 16:02:33,379 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 180
+ 2025-09-27 16:02:33,963 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.9024
+ 2025-09-27 16:02:39,990 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.6600 | Val rms_score: 0.9029
+ 2025-09-27 16:02:46,021 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.6028 | Val rms_score: 0.8829
+ 2025-09-27 16:02:46,567 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 270
+ 2025-09-27 16:02:47,148 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.8829
+ 2025-09-27 16:02:53,058 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.5292 | Val rms_score: 0.9081
+ 2025-09-27 16:02:59,019 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.5306 | Val rms_score: 0.8880
+ 2025-09-27 16:03:04,906 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.5375 | Val rms_score: 0.8750
+ 2025-09-27 16:03:05,095 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 405
+ 2025-09-27 16:03:05,684 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.8750
+ 2025-09-27 16:03:11,805 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.4861 | Val rms_score: 0.8807
+ 2025-09-27 16:03:17,923 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.4611 | Val rms_score: 0.8653
+ 2025-09-27 16:03:18,594 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 495
+ 2025-09-27 16:03:19,269 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.8653
+ 2025-09-27 16:03:25,264 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4562 | Val rms_score: 0.8672
+ 2025-09-27 16:03:31,130 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.4389 | Val rms_score: 0.8579
+ 2025-09-27 16:03:31,310 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 585
+ 2025-09-27 16:03:31,914 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.8579
+ 2025-09-27 16:03:37,917 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.4021 | Val rms_score: 0.8706
+ 2025-09-27 16:03:43,935 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.4139 | Val rms_score: 0.8526
+ 2025-09-27 16:03:44,125 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 675
+ 2025-09-27 16:03:44,713 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.8526
+ 2025-09-27 16:03:50,797 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.3953 | Val rms_score: 0.8673
+ 2025-09-27 16:03:57,130 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.3917 | Val rms_score: 0.9063
+ 2025-09-27 16:04:04,123 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.3109 | Val rms_score: 0.8596
+ 2025-09-27 16:04:10,163 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.3500 | Val rms_score: 0.8558
+ 2025-09-27 16:04:16,035 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.3417 | Val rms_score: 0.8478
+ 2025-09-27 16:04:16,194 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 900
+ 2025-09-27 16:04:16,791 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.8478
+ 2025-09-27 16:04:22,815 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.3403 | Val rms_score: 0.8717
+ 2025-09-27 16:04:29,029 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.3431 | Val rms_score: 0.8699
+ 2025-09-27 16:04:35,954 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.3696 | Val rms_score: 0.8896
+ 2025-09-27 16:04:42,050 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.3583 | Val rms_score: 0.8509
+ 2025-09-27 16:04:48,321 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.3250 | Val rms_score: 0.8534
+ 2025-09-27 16:04:54,218 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.3056 | Val rms_score: 0.8549
+ 2025-09-27 16:05:00,407 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.2854 | Val rms_score: 0.8582
+ 2025-09-27 16:05:06,594 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.2986 | Val rms_score: 0.8558
+ 2025-09-27 16:05:12,491 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.3047 | Val rms_score: 0.8639
+ 2025-09-27 16:05:18,341 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.2819 | Val rms_score: 0.8543
+ 2025-09-27 16:05:24,395 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.2792 | Val rms_score: 0.8491
+ 2025-09-27 16:05:30,950 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.2797 | Val rms_score: 0.8466
+ 2025-09-27 16:05:31,107 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1440
+ 2025-09-27 16:05:31,707 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 32 with val rms_score: 0.8466
+ 2025-09-27 16:05:37,573 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.2722 | Val rms_score: 0.8624
+ 2025-09-27 16:05:43,465 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.2656 | Val rms_score: 0.8584
+ 2025-09-27 16:05:49,929 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.2653 | Val rms_score: 0.8550
+ 2025-09-27 16:05:55,748 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.2641 | Val rms_score: 0.8516
+ 2025-09-27 16:06:02,079 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.2569 | Val rms_score: 0.8537
+ 2025-09-27 16:06:08,281 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.2313 | Val rms_score: 0.8477
+ 2025-09-27 16:06:14,286 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.2528 | Val rms_score: 0.8557
+ 2025-09-27 16:06:20,295 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.2528 | Val rms_score: 0.8461
+ 2025-09-27 16:06:20,461 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1800
+ 2025-09-27 16:06:21,060 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 40 with val rms_score: 0.8461
+ 2025-09-27 16:06:27,186 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.2639 | Val rms_score: 0.8514
+ 2025-09-27 16:06:33,430 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.2542 | Val rms_score: 0.8489
+ 2025-09-27 16:06:39,439 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.2607 | Val rms_score: 0.8632
+ 2025-09-27 16:06:45,645 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.2375 | Val rms_score: 0.8477
+ 2025-09-27 16:06:52,801 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.2675 | Val rms_score: 0.8640
+ 2025-09-27 16:06:58,736 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.2417 | Val rms_score: 0.8577
+ 2025-09-27 16:07:04,952 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.2062 | Val rms_score: 0.8611
+ 2025-09-27 16:07:10,708 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.2361 | Val rms_score: 0.8430
+ 2025-09-27 16:07:10,865 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2160
+ 2025-09-27 16:07:11,454 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 48 with val rms_score: 0.8430
+ 2025-09-27 16:07:17,356 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.1992 | Val rms_score: 0.8583
+ 2025-09-27 16:07:23,511 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.2208 | Val rms_score: 0.8640
+ 2025-09-27 16:07:29,574 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.2278 | Val rms_score: 0.8675
+ 2025-09-27 16:07:36,027 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.2313 | Val rms_score: 0.8864
+ 2025-09-27 16:07:42,067 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.2250 | Val rms_score: 0.8580
+ 2025-09-27 16:07:47,934 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.2240 | Val rms_score: 0.8599
+ 2025-09-27 16:07:53,837 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.2181 | Val rms_score: 0.8526
+ 2025-09-27 16:07:59,717 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.2266 | Val rms_score: 0.8645
+ 2025-09-27 16:08:06,127 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.2111 | Val rms_score: 0.8705
+ 2025-09-27 16:08:12,217 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.2359 | Val rms_score: 0.8568
+ 2025-09-27 16:08:18,300 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.2111 | Val rms_score: 0.8525
+ 2025-09-27 16:08:24,451 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.2056 | Val rms_score: 0.8539
+ 2025-09-27 16:08:30,334 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.2000 | Val rms_score: 0.8558
+ 2025-09-27 16:08:36,620 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.1986 | Val rms_score: 0.8581
+ 2025-09-27 16:08:42,506 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.2125 | Val rms_score: 0.8458
+ 2025-09-27 16:08:48,435 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.2000 | Val rms_score: 0.8435
+ 2025-09-27 16:08:54,447 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.2037 | Val rms_score: 0.8438
+ 2025-09-27 16:09:00,741 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.1917 | Val rms_score: 0.8491
+ 2025-09-27 16:09:08,753 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.1865 | Val rms_score: 0.8505
+ 2025-09-27 16:09:14,722 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.1861 | Val rms_score: 0.8459
+ 2025-09-27 16:09:20,843 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.1688 | Val rms_score: 0.8616
+ 2025-09-27 16:09:28,519 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.1889 | Val rms_score: 0.8451
+ 2025-09-27 16:09:34,281 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.1875 | Val rms_score: 0.8555
+ 2025-09-27 16:09:41,951 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.1961 | Val rms_score: 0.8499
+ 2025-09-27 16:09:49,138 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.1819 | Val rms_score: 0.8496
+ 2025-09-27 16:09:55,637 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.1667 | Val rms_score: 0.8510
+ 2025-09-27 16:10:01,748 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.1889 | Val rms_score: 0.8501
+ 2025-09-27 16:10:07,635 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.1898 | Val rms_score: 0.8466
+ 2025-09-27 16:10:14,112 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.1792 | Val rms_score: 0.8503
+ 2025-09-27 16:10:20,446 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.1953 | Val rms_score: 0.8557
+ 2025-09-27 16:10:26,562 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.1806 | Val rms_score: 0.8648
+ 2025-09-27 16:10:34,469 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.1806 | Val rms_score: 0.8586
+ 2025-09-27 16:10:40,617 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.1847 | Val rms_score: 0.8614
+ 2025-09-27 16:10:46,943 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.1944 | Val rms_score: 0.8480
+ 2025-09-27 16:10:52,757 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.1786 | Val rms_score: 0.8563
+ 2025-09-27 16:10:58,718 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.1819 | Val rms_score: 0.8516
+ 2025-09-27 16:11:04,653 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.1775 | Val rms_score: 0.8671
+ 2025-09-27 16:11:10,713 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.1688 | Val rms_score: 0.8459
+ 2025-09-27 16:11:17,353 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.2281 | Val rms_score: 0.8414
+ 2025-09-27 16:11:17,510 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3915
+ 2025-09-27 16:11:18,121 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 87 with val rms_score: 0.8414
+ 2025-09-27 16:11:24,130 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.1875 | Val rms_score: 0.8366
+ 2025-09-27 16:11:24,345 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3960
+ 2025-09-27 16:11:24,936 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 88 with val rms_score: 0.8366
+ 2025-09-27 16:11:31,973 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.1898 | Val rms_score: 0.8415
+ 2025-09-27 16:11:37,961 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.1722 | Val rms_score: 0.8459
+ 2025-09-27 16:11:43,898 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.1715 | Val rms_score: 0.8578
+ 2025-09-27 16:11:50,144 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.1617 | Val rms_score: 0.8536
+ 2025-09-27 16:11:56,113 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.1660 | Val rms_score: 0.8491
+ 2025-09-27 16:12:02,359 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.1615 | Val rms_score: 0.8474
+ 2025-09-27 16:12:08,401 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.1653 | Val rms_score: 0.8447
+ 2025-09-27 16:12:14,395 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.1672 | Val rms_score: 0.8556
+ 2025-09-27 16:12:20,723 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.1667 | Val rms_score: 0.8601
+ 2025-09-27 16:12:26,671 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.1437 | Val rms_score: 0.8565
+ 2025-09-27 16:12:32,592 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.1632 | Val rms_score: 0.8498
+ 2025-09-27 16:12:38,561 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.1646 | Val rms_score: 0.8492
+ 2025-09-27 16:12:39,183 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.8669
+ 2025-09-27 16:12:39,556 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_solubility at 2025-09-27_16-12-39
+ 2025-09-27 16:12:44,706 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9556 | Val rms_score: 0.9661
+ 2025-09-27 16:12:44,706 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-27 16:12:46,569 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9661
265
+ 2025-09-27 16:12:53,003 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.8333 | Val rms_score: 0.9435
266
+ 2025-09-27 16:12:53,174 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
267
+ 2025-09-27 16:12:53,780 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.9435
268
+ 2025-09-27 16:12:59,856 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.7536 | Val rms_score: 0.9216
269
+ 2025-09-27 16:13:00,041 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 135
270
+ 2025-09-27 16:13:00,624 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.9216
271
+ 2025-09-27 16:13:06,527 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.7028 | Val rms_score: 0.9090
272
+ 2025-09-27 16:13:06,716 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 180
273
+ 2025-09-27 16:13:07,294 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.9090
274
+ 2025-09-27 16:13:13,290 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.6100 | Val rms_score: 0.8931
275
+ 2025-09-27 16:13:13,478 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 225
276
+ 2025-09-27 16:13:14,067 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.8931
277
+ 2025-09-27 16:13:20,082 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.6167 | Val rms_score: 0.8801
278
+ 2025-09-27 16:13:20,622 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 270
279
+ 2025-09-27 16:13:21,206 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.8801
280
+ 2025-09-27 16:13:28,652 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.5000 | Val rms_score: 0.8980
281
+ 2025-09-27 16:13:34,824 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.5389 | Val rms_score: 0.8726
282
+ 2025-09-27 16:13:35,006 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 360
283
+ 2025-09-27 16:13:35,602 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.8726
284
+ 2025-09-27 16:13:41,740 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.5344 | Val rms_score: 0.8702
285
+ 2025-09-27 16:13:41,929 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 405
286
+ 2025-09-27 16:13:42,521 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.8702
287
+ 2025-09-27 16:13:48,917 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.4972 | Val rms_score: 0.8736
288
+ 2025-09-27 16:13:54,913 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.4722 | Val rms_score: 0.8656
289
+ 2025-09-27 16:13:55,457 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 495
290
+ 2025-09-27 16:13:56,061 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.8656
291
+ 2025-09-27 16:14:02,045 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.4688 | Val rms_score: 0.8769
292
+ 2025-09-27 16:14:08,139 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.4417 | Val rms_score: 0.8642
293
+ 2025-09-27 16:14:08,329 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 585
294
+ 2025-09-27 16:14:08,920 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.8642
295
+ 2025-09-27 16:14:15,002 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.4208 | Val rms_score: 0.8657
296
+ 2025-09-27 16:14:20,884 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.4167 | Val rms_score: 0.8622
297
+ 2025-09-27 16:14:21,066 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 675
298
+ 2025-09-27 16:14:21,660 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.8622
299
+ 2025-09-27 16:14:27,748 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.4125 | Val rms_score: 0.8495
300
+ 2025-09-27 16:14:28,339 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 720
301
+ 2025-09-27 16:14:28,944 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.8495
302
+ 2025-09-27 16:14:34,785 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.3861 | Val rms_score: 0.8534
303
+ 2025-09-27 16:14:40,813 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.3922 | Val rms_score: 0.8673
304
+ 2025-09-27 16:14:46,651 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.3750 | Val rms_score: 0.8619
305
+ 2025-09-27 16:14:52,739 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.3778 | Val rms_score: 0.8537
306
+ 2025-09-27 16:14:58,771 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.3472 | Val rms_score: 0.8549
307
+ 2025-09-27 16:15:05,079 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.3417 | Val rms_score: 0.8556
308
+ 2025-09-27 16:15:12,189 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.3750 | Val rms_score: 0.8651
309
+ 2025-09-27 16:15:18,169 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.3319 | Val rms_score: 0.8517
310
+ 2025-09-27 16:15:24,231 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.3187 | Val rms_score: 0.8616
311
+ 2025-09-27 16:15:30,164 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.3069 | Val rms_score: 0.8461
312
+ 2025-09-27 16:15:30,681 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1170
313
+ 2025-09-27 16:15:31,293 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.8461
314
+ 2025-09-27 16:15:37,597 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.3000 | Val rms_score: 0.8482
315
+ 2025-09-27 16:15:43,759 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.3014 | Val rms_score: 0.8690
316
+ 2025-09-27 16:15:49,756 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.3281 | Val rms_score: 0.8510
317
+ 2025-09-27 16:15:55,647 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.3069 | Val rms_score: 0.8468
318
+ 2025-09-27 16:16:01,549 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.2792 | Val rms_score: 0.8535
319
+ 2025-09-27 16:16:07,976 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.2734 | Val rms_score: 0.8905
320
+ 2025-09-27 16:16:14,012 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.2778 | Val rms_score: 0.8531
321
+ 2025-09-27 16:16:20,011 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.2833 | Val rms_score: 0.8491
322
+ 2025-09-27 16:16:26,469 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.2639 | Val rms_score: 0.8576
323
+ 2025-09-27 16:16:32,419 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.2812 | Val rms_score: 0.8583
324
+ 2025-09-27 16:16:38,733 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.2653 | Val rms_score: 0.8605
325
+ 2025-09-27 16:16:44,935 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.4406 | Val rms_score: 0.8366
326
+ 2025-09-27 16:16:45,092 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1710
327
+ 2025-09-27 16:16:45,809 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 38 with val rms_score: 0.8366
328
+ 2025-09-27 16:16:51,852 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.2708 | Val rms_score: 0.8377
329
+ 2025-09-27 16:16:57,735 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.2514 | Val rms_score: 0.8492
330
+ 2025-09-27 16:17:03,953 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.2472 | Val rms_score: 0.8533
331
+ 2025-09-27 16:17:10,549 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.2417 | Val rms_score: 0.8533
332
+ 2025-09-27 16:17:16,520 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.2500 | Val rms_score: 0.8586
333
+ 2025-09-27 16:17:22,420 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.2403 | Val rms_score: 0.8645
334
+ 2025-09-27 16:17:29,443 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.2350 | Val rms_score: 0.8572
335
+ 2025-09-27 16:17:35,269 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.2417 | Val rms_score: 0.8590
336
+ 2025-09-27 16:17:41,515 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.2323 | Val rms_score: 0.8804
337
+ 2025-09-27 16:17:47,774 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.2403 | Val rms_score: 0.8679
338
+ 2025-09-27 16:17:53,930 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.1828 | Val rms_score: 0.8607
339
+ 2025-09-27 16:18:00,167 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.2167 | Val rms_score: 0.8611
340
+ 2025-09-27 16:18:06,130 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.2194 | Val rms_score: 0.8776
341
+ 2025-09-27 16:18:12,908 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.2156 | Val rms_score: 0.8586
342
+ 2025-09-27 16:18:18,776 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.2181 | Val rms_score: 0.8588
343
+ 2025-09-27 16:18:24,719 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.2313 | Val rms_score: 0.8646
344
+ 2025-09-27 16:18:30,717 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.2125 | Val rms_score: 0.8625
345
+ 2025-09-27 16:18:36,752 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.2156 | Val rms_score: 0.8714
346
+ 2025-09-27 16:18:43,641 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.2167 | Val rms_score: 0.8595
347
+ 2025-09-27 16:18:49,556 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.2000 | Val rms_score: 0.8563
348
+ 2025-09-27 16:18:55,631 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.2111 | Val rms_score: 0.8561
349
+ 2025-09-27 16:19:01,485 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.2125 | Val rms_score: 0.8549
350
+ 2025-09-27 16:19:07,554 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.2042 | Val rms_score: 0.8475
351
+ 2025-09-27 16:19:14,017 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.2069 | Val rms_score: 0.8647
352
+ 2025-09-27 16:19:20,158 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.2009 | Val rms_score: 0.8578
353
+ 2025-09-27 16:19:26,278 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.1986 | Val rms_score: 0.8675
354
+ 2025-09-27 16:19:32,277 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.2100 | Val rms_score: 0.8558
355
+ 2025-09-27 16:19:38,246 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.1972 | Val rms_score: 0.8657
356
+ 2025-09-27 16:19:47,186 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.1938 | Val rms_score: 0.8694
357
+ 2025-09-27 16:19:52,961 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.2083 | Val rms_score: 0.8725
358
+ 2025-09-27 16:19:58,973 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.1703 | Val rms_score: 0.8592
359
+ 2025-09-27 16:20:04,964 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.1903 | Val rms_score: 0.8685
360
+ 2025-09-27 16:20:11,317 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.1917 | Val rms_score: 0.8562
361
+ 2025-09-27 16:20:17,738 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.1875 | Val rms_score: 0.8553
362
+ 2025-09-27 16:20:23,495 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.1986 | Val rms_score: 0.8553
363
+ 2025-09-27 16:20:29,554 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.1917 | Val rms_score: 0.8643
364
+ 2025-09-27 16:20:35,543 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.1861 | Val rms_score: 0.8494
365
+ 2025-09-27 16:20:42,015 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.1844 | Val rms_score: 0.8482
366
+ 2025-09-27 16:20:48,713 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.1847 | Val rms_score: 0.8569
367
+ 2025-09-27 16:20:54,922 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.1727 | Val rms_score: 0.8510
368
+ 2025-09-27 16:21:01,127 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.1833 | Val rms_score: 0.8558
369
+ 2025-09-27 16:21:07,004 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.1806 | Val rms_score: 0.8584
370
+ 2025-09-27 16:21:13,083 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.1861 | Val rms_score: 0.8665
371
+ 2025-09-27 16:21:19,283 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.1903 | Val rms_score: 0.8663
372
+ 2025-09-27 16:21:25,277 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.1830 | Val rms_score: 0.8621
373
+ 2025-09-27 16:21:31,115 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.1792 | Val rms_score: 0.8539
374
+ 2025-09-27 16:21:37,073 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.1875 | Val rms_score: 0.8528
375
+ 2025-09-27 16:21:43,289 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.1819 | Val rms_score: 0.8556
376
+ 2025-09-27 16:21:49,561 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.1740 | Val rms_score: 0.8579
377
+ 2025-09-27 16:21:55,600 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.1819 | Val rms_score: 0.8708
378
+ 2025-09-27 16:22:02,593 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.2375 | Val rms_score: 0.8743
379
+ 2025-09-27 16:22:08,432 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.1778 | Val rms_score: 0.8647
380
+ 2025-09-27 16:22:14,462 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.1819 | Val rms_score: 0.8631
381
+ 2025-09-27 16:22:20,750 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.1672 | Val rms_score: 0.8650
382
+ 2025-09-27 16:22:26,748 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.1729 | Val rms_score: 0.8603
383
+ 2025-09-27 16:22:32,855 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.1688 | Val rms_score: 0.8599
384
+ 2025-09-27 16:22:38,727 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.1861 | Val rms_score: 0.8615
385
+ 2025-09-27 16:22:44,718 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.1656 | Val rms_score: 0.8568
386
+ 2025-09-27 16:22:51,202 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.1847 | Val rms_score: 0.8586
387
+ 2025-09-27 16:22:57,142 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.1500 | Val rms_score: 0.8614
388
+ 2025-09-27 16:23:03,004 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.1653 | Val rms_score: 0.8666
389
+ 2025-09-27 16:23:09,058 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.1653 | Val rms_score: 0.8644
390
+ 2025-09-27 16:23:09,694 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.8799
391
+ 2025-09-27 16:23:10,125 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.8735, Std Dev: 0.0053