eacortes committed on
Commit
f34cb4a
·
verified ·
1 Parent(s): c852481

Update README and add additional benchmarking logs

Files changed (14)
  1. README.md +184 -18
  2. logs_modchembert_classification_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_antimalarial_epochs100_batch_size32_20250926_005715.log +361 -0
  3. logs_modchembert_classification_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_cocrystal_epochs100_batch_size32_20250926_032557.log +351 -0
  4. logs_modchembert_classification_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_covid19_epochs100_batch_size32_20250925_210847.log +347 -0
  5. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_microsom_stab_h_epochs100_batch_size32_20250926_053825.log +351 -0
  6. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_microsom_stab_r_epochs100_batch_size32_20250926_075143.log +337 -0
  7. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_permeability_epochs100_batch_size32_20250926_090956.log +365 -0
  8. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_ppb_h_epochs100_batch_size32_20250926_103701.log +331 -0
  9. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_ppb_r_epochs100_batch_size32_20250926_104920.log +333 -0
  10. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_solubility_epochs100_batch_size32_20250926_110128.log +341 -0
  11. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_astrazeneca_cl_epochs100_batch_size32_20250926_121606.log +319 -0
  12. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_astrazeneca_logd74_epochs100_batch_size32_20250926_131838.log +411 -0
  13. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_astrazeneca_ppb_epochs100_batch_size32_20250926_152951.log +327 -0
  14. logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_astrazeneca_solubility_epochs100_batch_size32_20250926_155606.log +379 -0
README.md CHANGED
@@ -116,6 +116,123 @@ model-index:
116
  metrics:
117
  - type: rmse
118
  value: 0.6874
119
  ---
120
 
121
  # ModChemBERT: ModernBERT as a Chemical Language Model
@@ -157,10 +274,10 @@ print(fill("c1ccccc1[MASK]"))
157
  - Encoder Layers: 22
158
  - Attention heads: 12
159
  - Max sequence length: 256 tokens (MLM primarily trained with 128-token sequences)
160
- - Vocabulary: BPE tokenizer using [MolFormer's vocab](https://github.com/emapco/ModChemBERT/blob/main/modchembert/tokenizers/molformer/vocab.json) (2362 tokens)
161
 
162
  ## Pooling (Classifier / Regressor Head)
163
- Kallergis et al. [1] demonstrated that the CLM embedding method prior to the prediction head can significantly impact downstream performance.
164
 
165
  Behrendt et al. [2] noted that the last few layers contain task-specific information and that pooling methods leveraging information from multiple layers can enhance model performance. Their results further demonstrated that the `max_seq_mha` pooling method was particularly effective in low-data regimes, which is often the case for molecular property prediction tasks.
166
 
@@ -176,6 +293,9 @@ Multiple pooling strategies are supported by ModChemBERT to explore their impact
176
  - `mean_sum`: Mean over all layers then sum tokens
177
  - `max_seq_mean`: Max over last k layers then mean tokens
178
 
179
  ## Training Pipeline
180
  <div align="center">
181
  <img src="https://cdn-uploads.huggingface.co/production/uploads/656892962693fa22e18b5331/bxNbpgMkU8m60ypyEJoWQ.png" alt="ModChemBERT Training Pipeline" width="650"/>
@@ -188,23 +308,33 @@ Following Sultan et al. [3], multi-task regression (physicochemical properties)
188
  Inspired by ModernBERT [4], JaColBERTv2.5 [5], and Llama 3.1 [6], where results show that model merging can enhance generalization or performance while mitigating overfitting to any single fine-tune or annealing checkpoint.
189
 
190
  ## Datasets
191
- - Pretraining: [Derify/augmented_canonical_druglike_QED_Pfizer_15M](https://huggingface.co/datasets/Derify/augmented_canonical_druglike_QED_Pfizer_15M)
192
- - Domain Adaptive Pretraining (DAPT) & Task Adaptive Fine-tuning (TAFT): ADME + AstraZeneca datasets (10 tasks) with scaffold splits from DA4MT pipeline (see [domain-adaptation-molecular-transformers](https://github.com/emapco/ModChemBERT/tree/main/domain-adaptation-molecular-transformers))
193
- - Benchmarking: ChemBERTa-3 [7] tasks (BACE, BBBP, TOX21, HIV, SIDER, CLINTOX for classification; ESOL, FREESOLV, LIPO, BACE, CLEARANCE for regression)
194
 
195
  ## Benchmarking
196
- Benchmarks were conducted with the ChemBERTa-3 framework using DeepChem scaffold splits. Each task was trained for 100 epochs with 3 random seeds.
197
 
198
  ### Evaluation Methodology
199
- - Classification Metric: ROC AUC.
200
- - Regression Metric: RMSE.
201
  - Aggregation: Mean ± standard deviation of the triplicate results.
202
- - Input Constraints: SMILES truncated / filtered to ≤200 tokens, following the MolFormer paper's recommendation.
203
 
204
  ### Results
205
  <details><summary>Click to expand</summary>
206
 
207
- #### Classification Datasets (ROC AUC - Higher is better)
208
 
209
  | Model | BACE↑ | BBBP↑ | CLINTOX↑ | HIV↑ | SIDER↑ | TOX21↑ | AVG† |
210
  | ---------------------------------------------------------------------------- | ----------------- | ----------------- | --------------------- | --------------------- | --------------------- | ----------------- | ------ |
@@ -212,14 +342,14 @@ Benchmarks were conducted with the ChemBERTa-3 framework using DeepChem scaffold
212
  | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 0.781 ± 0.019 | 0.700 ± 0.027 | 0.979 ± 0.022 | 0.740 ± 0.013 | 0.611 ± 0.002 | 0.718 ± 0.011 | 0.7548 |
213
  | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 0.819 ± 0.019 | 0.735 ± 0.019 | 0.839 ± 0.013 | 0.762 ± 0.005 | 0.618 ± 0.005 | 0.723 ± 0.012 | 0.7493 |
214
  | MoLFormer-LHPC* | **0.887 ± 0.004** | **0.908 ± 0.013** | 0.993 ± 0.004 | 0.750 ± 0.003 | 0.622 ± 0.007 | **0.791 ± 0.014** | 0.8252 |
215
- | ------------------------- | ----------------- | ----------------- | ------------------- | ------------------- | ------------------- | ----------------- | ------ |
216
  | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8065 ± 0.0103 | 0.7222 ± 0.0150 | 0.9709 ± 0.0227 | ***0.7800 ± 0.0133*** | 0.6419 ± 0.0113 | 0.7400 ± 0.0044 | 0.7769 |
217
  | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8224 ± 0.0156 | 0.7402 ± 0.0095 | 0.9820 ± 0.0138 | 0.7702 ± 0.0020 | 0.6303 ± 0.0039 | 0.7360 ± 0.0036 | 0.7802 |
218
  | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.7924 ± 0.0155 | 0.7282 ± 0.0058 | 0.9725 ± 0.0213 | 0.7770 ± 0.0047 | 0.6542 ± 0.0128 | *0.7646 ± 0.0039* | 0.7815 |
219
  | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8213 ± 0.0051 | 0.7356 ± 0.0094 | 0.9664 ± 0.0202 | 0.7750 ± 0.0048 | 0.6415 ± 0.0094 | 0.7263 ± 0.0036 | 0.7777 |
220
  | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | *0.8346 ± 0.0045* | *0.7573 ± 0.0120* | ***0.9938 ± 0.0017*** | 0.7737 ± 0.0034 | ***0.6600 ± 0.0061*** | 0.7518 ± 0.0047 | 0.7952 |
221
 
222
- #### Regression Datasets (RMSE - Lower is better)
223
 
224
  | Model | BACE↓ | CLEARANCE↓ | ESOL↓ | FREESOLV↓ | LIPO↓ | AVG‡ |
225
  | ---------------------------------------------------------------------------- | --------------------- | ---------------------- | --------------------- | --------------------- | --------------------- | ---------------- |
@@ -227,17 +357,45 @@ Benchmarks were conducted with the ChemBERTa-3 framework using DeepChem scaffold
227
  | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 1.011 ± 0.038 | 51.582 ± 3.079 | 0.920 ± 0.011 | 0.536 ± 0.016 | 0.758 ± 0.013 | 0.8063 / 10.9614 |
228
  | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 1.094 ± 0.126 | 52.058 ± 2.767 | 0.829 ± 0.019 | 0.572 ± 0.023 | 0.728 ± 0.016 | 0.8058 / 11.0562 |
229
  | MoLFormer-LHPC* | 1.201 ± 0.100 | 45.74 ± 2.637 | 0.848 ± 0.031 | 0.683 ± 0.040 | 0.895 ± 0.080 | 0.9068 / 9.8734 |
230
- | ------------------------- | ------------------- | -------------------- | ------------------- | ------------------- | ------------------- | ---------------- |
231
  | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 1.0893 ± 0.1319 | 49.0005 ± 1.2787 | 0.8456 ± 0.0406 | 0.5491 ± 0.0134 | 0.7147 ± 0.0062 | 0.7997 / 10.4398 |
232
  | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.9931 ± 0.0258 | 45.4951 ± 0.7112 | 0.9319 ± 0.0153 | 0.6049 ± 0.0666 | 0.6874 ± 0.0040 | 0.8043 / 9.7425 |
233
  | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 1.0304 ± 0.1146 | 47.8418 ± 0.4070 | ***0.7669 ± 0.0024*** | 0.5293 ± 0.0267 | 0.6708 ± 0.0074 | 0.7493 / 10.1678 |
234
  | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.9713 ± 0.0224 | ***42.8010 ± 3.3475*** | 0.8169 ± 0.0268 | 0.5445 ± 0.0257 | 0.6820 ± 0.0028 | 0.7537 / 9.1631 |
235
  | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.9665 ± 0.0250*** | 44.0137 ± 1.1110 | 0.8158 ± 0.0115 | ***0.4979 ± 0.0158*** | ***0.6505 ± 0.0126*** | 0.7327 / 9.3889 |
236
 
237
  **Bold** indicates the best result in the column; *italic* indicates the best result among ModChemBERT checkpoints.<br/>
238
  \* Published results from the ChemBERTa-3 [7] paper for optimized chemical language models using DeepChem scaffold splits.<br/>
239
- † AVG column shows the mean score across all classification tasks.<br/>
240
- ‡ AVG column shows the mean scores across all regression tasks without and with the clearance score.
241
 
242
  </details>
243
 
@@ -277,6 +435,9 @@ Optimal parameters (per dataset) for the `MLM + DAPT + TAFT OPT` merged model:
277
  | esol | 64 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |
278
  | freesolv | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
279
  | lipo | 32 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
280
 
281
  </details>
282
 
@@ -310,10 +471,15 @@ If you use ModChemBERT in your research, please cite the checkpoint and the foll
310
  ```
311
 
312
  ## References
313
- 1. Kallergis, Georgios, et al. "Domain adaptable language modeling of chemical compounds identifies potent pathoblockers for Pseudomonas aeruginosa." Communications Chemistry 8.1 (2025): 114.
314
  2. Behrendt, Maike, Stefan Sylvius Wagner, and Stefan Harmeling. "MaxPoolBERT: Enhancing BERT Classification via Layer-and Token-Wise Aggregation." arXiv preprint arXiv:2505.15696 (2025).
315
  3. Sultan, Afnan, et al. "Transformers for molecular property prediction: Domain adaptation efficiently improves performance." arXiv preprint arXiv:2503.03360 (2025).
316
  4. Warner, Benjamin, et al. "Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference." arXiv preprint arXiv:2412.13663 (2024).
317
- 5. Clavié, Benjamin. "JaColBERTv2.5: Optimising Multi-Vector Retrievers to Create State-of-the-Art Japanese Retrievers with Constrained Resources." Journal of Natural Language Processing 32.1 (2025): 176-218.
318
  6. Grattafiori, Aaron, et al. "The llama 3 herd of models." arXiv preprint arXiv:2407.21783 (2024).
319
- 7. Singh, Riya, et al. "ChemBERTa-3: An Open Source Training Framework for Chemical Foundation Models." (2025).
116
  metrics:
117
  - type: rmse
118
  value: 0.6874
119
+ - task:
120
+ type: text-classification
121
+ name: Classification (ROC AUC)
122
+ dataset:
123
+ name: Antimalarial
124
+ type: Antimalarial
125
+ metrics:
126
+ - type: roc_auc
127
+ value: 0.8756
128
+ - task:
129
+ type: text-classification
130
+ name: Classification (ROC AUC)
131
+ dataset:
132
+ name: Cocrystal
133
+ type: Cocrystal
134
+ metrics:
135
+ - type: roc_auc
136
+ value: 0.8288
137
+ - task:
138
+ type: text-classification
139
+ name: Classification (ROC AUC)
140
+ dataset:
141
+ name: COVID19
142
+ type: COVID19
143
+ metrics:
144
+ - type: roc_auc
145
+ value: 0.8029
146
+ - task:
147
+ type: regression
148
+ name: Regression (RMSE)
149
+ dataset:
150
+ name: ADME microsom stab human
151
+ type: ADME
152
+ metrics:
153
+ - type: rmse
154
+ value: 0.4199
155
+ - task:
156
+ type: regression
157
+ name: Regression (RMSE)
158
+ dataset:
159
+ name: ADME microsom stab rat
160
+ type: ADME
161
+ metrics:
162
+ - type: rmse
163
+ value: 0.4568
164
+ - task:
165
+ type: regression
166
+ name: Regression (RMSE)
167
+ dataset:
168
+ name: ADME permeability
169
+ type: ADME
170
+ metrics:
171
+ - type: rmse
172
+ value: 0.5042
173
+ - task:
174
+ type: regression
175
+ name: Regression (RMSE)
176
+ dataset:
177
+ name: ADME ppb human
178
+ type: ADME
179
+ metrics:
180
+ - type: rmse
181
+ value: 0.8376
182
+ - task:
183
+ type: regression
184
+ name: Regression (RMSE)
185
+ dataset:
186
+ name: ADME ppb rat
187
+ type: ADME
188
+ metrics:
189
+ - type: rmse
190
+ value: 0.8446
191
+ - task:
192
+ type: regression
193
+ name: Regression (RMSE)
194
+ dataset:
195
+ name: ADME solubility
196
+ type: ADME
197
+ metrics:
198
+ - type: rmse
199
+ value: 0.4800
200
+ - task:
201
+ type: regression
202
+ name: Regression (RMSE)
203
+ dataset:
204
+ name: AstraZeneca CL
205
+ type: AstraZeneca
206
+ metrics:
207
+ - type: rmse
208
+ value: 0.5351
209
+ - task:
210
+ type: regression
211
+ name: Regression (RMSE)
212
+ dataset:
213
+ name: AstraZeneca LogD74
214
+ type: AstraZeneca
215
+ metrics:
216
+ - type: rmse
217
+ value: 0.8191
218
+ - task:
219
+ type: regression
220
+ name: Regression (RMSE)
221
+ dataset:
222
+ name: AstraZeneca PPB
223
+ type: AstraZeneca
224
+ metrics:
225
+ - type: rmse
226
+ value: 0.1237
227
+ - task:
228
+ type: regression
229
+ name: Regression (RMSE)
230
+ dataset:
231
+ name: AstraZeneca Solubility
232
+ type: AstraZeneca
233
+ metrics:
234
+ - type: rmse
235
+ value: 0.9280
236
  ---
237
 
238
  # ModChemBERT: ModernBERT as a Chemical Language Model
 
274
  - Encoder Layers: 22
275
  - Attention heads: 12
276
  - Max sequence length: 256 tokens (MLM primarily trained with 128-token sequences)
277
+ - Tokenizer: BPE tokenizer using [MolFormer's vocab](https://github.com/emapco/ModChemBERT/blob/main/modchembert/tokenizers/molformer/vocab.json) (2362 tokens)
278
 
279
  ## Pooling (Classifier / Regressor Head)
280
+ Kallergis et al. [1] demonstrated that the CLM embedding method prior to the prediction head was the strongest contributor to downstream performance among evaluated hyperparameters.
281
 
282
  Behrendt et al. [2] noted that the last few layers contain task-specific information and that pooling methods leveraging information from multiple layers can enhance model performance. Their results further demonstrated that the `max_seq_mha` pooling method was particularly effective in low-data regimes, which is often the case for molecular property prediction tasks.
283
 
 
293
  - `mean_sum`: Mean over all layers then sum tokens
294
  - `max_seq_mean`: Max over last k layers then mean tokens
295
 
296
+ Note: ModChemBERT’s `max_seq_mha` differs from MaxPoolBERT [2]. MaxPoolBERT uses PyTorch `nn.MultiheadAttention`, whereas ModChemBERT's `ModChemBertPoolingAttention` adapts ModernBERT’s `ModernBertAttention`.
297
+ On ChemBERTa-3 benchmarks this variant produced stronger validation metrics and avoided the training instabilities (sporadic zero / NaN losses and gradient norms) seen with `nn.MultiheadAttention`. Training instability with ModernBERT has been reported in the past ([discussion 1](https://huggingface.co/answerdotai/ModernBERT-base/discussions/59) and [discussion 2](https://huggingface.co/answerdotai/ModernBERT-base/discussions/63)).
298
+
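The layer/token reductions listed above can be sketched in a few lines. The following is a minimal NumPy illustration of two of the described modes; function names and shapes are assumptions for illustration, not ModChemBERT's actual implementation, and the attention step of `max_seq_mha` is omitted:

```python
import numpy as np

def mean_sum(hidden_states):
    """mean_sum: mean over all layers, then sum over tokens."""
    meaned = np.stack(hidden_states).mean(axis=0)  # (seq_len, hidden)
    return meaned.sum(axis=0)                      # (hidden,)

def max_seq_mean(hidden_states, k=3):
    """max_seq_mean: element-wise max over the last k layers,
    then mean over the token axis."""
    stacked = np.stack(hidden_states[-k:])  # (k, seq_len, hidden)
    maxed = stacked.max(axis=0)             # (seq_len, hidden)
    return maxed.mean(axis=0)               # (hidden,)

# toy example: 4 layers, 5 tokens, hidden size 8
rng = np.random.default_rng(0)
layers = [rng.normal(size=(5, 8)) for _ in range(4)]
print(max_seq_mean(layers, k=3).shape)  # (8,)
```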
299
  ## Training Pipeline
300
  <div align="center">
301
  <img src="https://cdn-uploads.huggingface.co/production/uploads/656892962693fa22e18b5331/bxNbpgMkU8m60ypyEJoWQ.png" alt="ModChemBERT Training Pipeline" width="650"/>
 
308
  Inspired by ModernBERT [4], JaColBERTv2.5 [5], and Llama 3.1 [6], where results show that model merging can enhance generalization or performance while mitigating overfitting to any single fine-tune or annealing checkpoint.
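In its simplest form, checkpoint merging is uniform (or weighted) parameter averaging across fine-tuned checkpoints. The sketch below illustrates that idea on plain dicts of floats; it is not the actual merge recipe used for these checkpoints:

```python
def merge_checkpoints(state_dicts, weights=None):
    """Average parameters across checkpoints with identical keys.
    state_dicts: list of {param_name: float} dicts (stand-ins for tensors)."""
    n = len(state_dicts)
    if weights is None:
        weights = [1.0 / n] * n  # uniform "soup"
    return {
        name: sum(w * sd[name] for w, sd in zip(weights, state_dicts))
        for name in state_dicts[0]
    }

a = {"w": 1.0, "b": 0.0}
b = {"w": 3.0, "b": 2.0}
print(merge_checkpoints([a, b]))  # {'w': 2.0, 'b': 1.0}
```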
309
 
310
  ## Datasets
311
+ - Pretraining: [Derify/augmented_canonical_druglike_QED_Pfizer_15M](https://huggingface.co/datasets/Derify/augmented_canonical_druglike_QED_Pfizer_15M) (canonical_smiles column)
312
+ - Domain Adaptive Pretraining (DAPT) & Task Adaptive Fine-tuning (TAFT): ADME (6 tasks) + AstraZeneca (4 tasks) datasets that are split using DA4MT's [3] Bemis-Murcko scaffold splitter (see [domain-adaptation-molecular-transformers](https://github.com/emapco/ModChemBERT/blob/main/domain-adaptation-molecular-transformers/da4mt/splitting.py))
313
+ - Benchmarking:
314
+ - ChemBERTa-3 [7]
315
+ - classification: BACE, BBBP, TOX21, HIV, SIDER, CLINTOX
316
+ - regression: ESOL, FREESOLV, LIPO, BACE, CLEARANCE
317
+ - Mswahili, et al. [8] proposed additional datasets for benchmarking chemical language models:
318
+ - classification: Antimalarial [9], Cocrystal [10], COVID19 [11]
319
+ - DAPT/TAFT stage regression datasets:
320
+ - ADME [12]: adme_microsom_stab_h, adme_microsom_stab_r, adme_permeability, adme_ppb_h, adme_ppb_r, adme_solubility
321
+ - AstraZeneca: astrazeneca_CL, astrazeneca_LogD74, astrazeneca_PPB, astrazeneca_Solubility
322
 
323
  ## Benchmarking
324
+ Benchmarks were conducted using the ChemBERTa-3 framework. DeepChem scaffold splits were utilized for all datasets, with the exception of the Antimalarial dataset, which employed a random split. Each task was trained for 100 epochs, with results averaged across 3 random seeds.
325
+
326
+ The complete hyperparameter configurations for these benchmarks are available here: [ChemBERTa3 configs](https://github.com/emapco/ModChemBERT/tree/main/conf/chemberta3)
327
 
328
  ### Evaluation Methodology
329
+ - Classification Metric: ROC AUC
330
+ - Regression Metric: RMSE
331
  - Aggregation: Mean ± standard deviation of the triplicate results.
332
+ - Input Constraints: SMILES truncated / filtered to ≤200 tokens, following ChemBERTa-3's recommendation.
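For illustration, the ≤200-token length filter can be approximated with a SMILES tokenization regex commonly used in the chemical language modeling literature; the actual filtering presumably relies on the model's own tokenizer:

```python
import re

# A widely used SMILES tokenization pattern (illustrative, not
# necessarily the exact tokenizer used for these benchmarks).
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
    r"|=|#|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|%[0-9]{2}|[0-9])"
)

def smiles_tokens(smiles):
    """Split a SMILES string into tokens."""
    return SMILES_REGEX.findall(smiles)

def within_limit(smiles, max_tokens=200):
    """Keep only molecules whose SMILES is at most max_tokens long."""
    return len(smiles_tokens(smiles)) <= max_tokens

print(smiles_tokens("c1ccccc1"))  # ['c', '1', 'c', 'c', 'c', 'c', 'c', '1']
```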
333
 
334
  ### Results
335
  <details><summary>Click to expand</summary>
336
 
337
+ #### ChemBERTa-3 Classification Datasets (ROC AUC - Higher is better)
338
 
339
  | Model | BACE↑ | BBBP↑ | CLINTOX↑ | HIV↑ | SIDER↑ | TOX21↑ | AVG† |
340
  | ---------------------------------------------------------------------------- | ----------------- | ----------------- | --------------------- | --------------------- | --------------------- | ----------------- | ------ |
 
342
  | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 0.781 ± 0.019 | 0.700 ± 0.027 | 0.979 ± 0.022 | 0.740 ± 0.013 | 0.611 ± 0.002 | 0.718 ± 0.011 | 0.7548 |
343
  | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 0.819 ± 0.019 | 0.735 ± 0.019 | 0.839 ± 0.013 | 0.762 ± 0.005 | 0.618 ± 0.005 | 0.723 ± 0.012 | 0.7493 |
344
  | MoLFormer-LHPC* | **0.887 ± 0.004** | **0.908 ± 0.013** | 0.993 ± 0.004 | 0.750 ± 0.003 | 0.622 ± 0.007 | **0.791 ± 0.014** | 0.8252 |
345
+ | | | | | | | | |
346
  | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8065 ± 0.0103 | 0.7222 ± 0.0150 | 0.9709 ± 0.0227 | ***0.7800 ± 0.0133*** | 0.6419 ± 0.0113 | 0.7400 ± 0.0044 | 0.7769 |
347
  | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8224 ± 0.0156 | 0.7402 ± 0.0095 | 0.9820 ± 0.0138 | 0.7702 ± 0.0020 | 0.6303 ± 0.0039 | 0.7360 ± 0.0036 | 0.7802 |
348
  | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.7924 ± 0.0155 | 0.7282 ± 0.0058 | 0.9725 ± 0.0213 | 0.7770 ± 0.0047 | 0.6542 ± 0.0128 | *0.7646 ± 0.0039* | 0.7815 |
349
  | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8213 ± 0.0051 | 0.7356 ± 0.0094 | 0.9664 ± 0.0202 | 0.7750 ± 0.0048 | 0.6415 ± 0.0094 | 0.7263 ± 0.0036 | 0.7777 |
350
  | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | *0.8346 ± 0.0045* | *0.7573 ± 0.0120* | ***0.9938 ± 0.0017*** | 0.7737 ± 0.0034 | ***0.6600 ± 0.0061*** | 0.7518 ± 0.0047 | 0.7952 |
351
 
352
+ #### ChemBERTa-3 Regression Datasets (RMSE - Lower is better)
353
 
354
  | Model | BACE↓ | CLEARANCE↓ | ESOL↓ | FREESOLV↓ | LIPO↓ | AVG‡ |
355
  | ---------------------------------------------------------------------------- | --------------------- | ---------------------- | --------------------- | --------------------- | --------------------- | ---------------- |
 
357
  | [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 1.011 ± 0.038 | 51.582 ± 3.079 | 0.920 ± 0.011 | 0.536 ± 0.016 | 0.758 ± 0.013 | 0.8063 / 10.9614 |
358
  | [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 1.094 ± 0.126 | 52.058 ± 2.767 | 0.829 ± 0.019 | 0.572 ± 0.023 | 0.728 ± 0.016 | 0.8058 / 11.0562 |
359
  | MoLFormer-LHPC* | 1.201 ± 0.100 | 45.74 ± 2.637 | 0.848 ± 0.031 | 0.683 ± 0.040 | 0.895 ± 0.080 | 0.9068 / 9.8734 |
360
+ | | | | | | | |
361
  | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 1.0893 ± 0.1319 | 49.0005 ± 1.2787 | 0.8456 ± 0.0406 | 0.5491 ± 0.0134 | 0.7147 ± 0.0062 | 0.7997 / 10.4398 |
362
  | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.9931 ± 0.0258 | 45.4951 ± 0.7112 | 0.9319 ± 0.0153 | 0.6049 ± 0.0666 | 0.6874 ± 0.0040 | 0.8043 / 9.7425 |
363
  | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 1.0304 ± 0.1146 | 47.8418 ± 0.4070 | ***0.7669 ± 0.0024*** | 0.5293 ± 0.0267 | 0.6708 ± 0.0074 | 0.7493 / 10.1678 |
364
  | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.9713 ± 0.0224 | ***42.8010 ± 3.3475*** | 0.8169 ± 0.0268 | 0.5445 ± 0.0257 | 0.6820 ± 0.0028 | 0.7537 / 9.1631 |
365
  | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.9665 ± 0.0250*** | 44.0137 ± 1.1110 | 0.8158 ± 0.0115 | ***0.4979 ± 0.0158*** | ***0.6505 ± 0.0126*** | 0.7327 / 9.3889 |
366
 
367
+ #### Mswahili, et al. [8] Proposed Classification Datasets (ROC AUC - Higher is better)
368
+
369
+ | Model | Antimalarial↑ | Cocrystal↑ | COVID19↑ | AVG† |
370
+ | ---------------------------------------------------------------------------- | --------------------- | --------------------- | --------------------- | ------ |
371
+ | **Tasks** | 1 | 1 | 1 | |
372
+ | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8707 ± 0.0032 | 0.7967 ± 0.0124 | 0.8106 ± 0.0170 | 0.8260 |
373
+ | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8756 ± 0.0056 | 0.8288 ± 0.0143 | 0.8029 ± 0.0159 | 0.8358 |
374
+ | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.8832 ± 0.0051 | 0.7866 ± 0.0204 | ***0.8308 ± 0.0026*** | 0.8335 |
375
+ | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8819 ± 0.0052 | 0.8550 ± 0.0106 | 0.8013 ± 0.0118 | 0.8461 |
376
+ | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.8966 ± 0.0045*** | ***0.8654 ± 0.0080*** | 0.8132 ± 0.0195 | 0.8584 |
377
+
378
+ #### ADME/AstraZeneca Regression Datasets (RMSE - Lower is better)
379
+
380
+ Hyperparameter optimization for the TAFT stage appears to induce overfitting, as the `MLM + DAPT + TAFT OPT` model shows slightly degraded performance on the ADME/AstraZeneca datasets compared to the `MLM + DAPT + TAFT` model.
381
+ The `MLM + DAPT + TAFT` model, a merge of unoptimized TAFT checkpoints trained with `max_seq_mean` pooling, achieved the best overall performance across the ADME/AstraZeneca datasets.
382
+
383
+ | | ADME | | | | | | AstraZeneca | | | | |
384
+ | ---------------------------------------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------ |
385
+ | Model | microsom_stab_h↓ | microsom_stab_r↓ | permeability↓ | ppb_h↓ | ppb_r↓ | solubility↓ | CL↓ | LogD74↓ | PPB↓ | Solubility↓ | AVG† |
386
+ | | | | | | | | | | | | |
387
+ | **Tasks** | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
388
+ | [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.4489 ± 0.0114 | 0.4685 ± 0.0225 | 0.5423 ± 0.0076 | 0.8041 ± 0.0378 | 0.7849 ± 0.0394 | 0.5191 ± 0.0147 | **0.4812 ± 0.0073** | 0.8204 ± 0.0070 | 0.1365 ± 0.0066 | 0.9614 ± 0.0189 | 0.5967 |
389
+ | [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | **0.4199 ± 0.0064** | 0.4568 ± 0.0091 | 0.5042 ± 0.0135 | 0.8376 ± 0.0629 | 0.8446 ± 0.0756 | 0.4800 ± 0.0118 | 0.5351 ± 0.0036 | 0.8191 ± 0.0066 | 0.1237 ± 0.0022 | 0.9280 ± 0.0088 | 0.5949 |
390
+ | [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.4375 ± 0.0027 | 0.4542 ± 0.0024 | 0.5202 ± 0.0141 | **0.7618 ± 0.0138** | 0.7027 ± 0.0023 | 0.5023 ± 0.0107 | 0.5104 ± 0.0110 | 0.7599 ± 0.0050 | 0.1233 ± 0.0088 | 0.8730 ± 0.0112 | 0.5645 |
391
+ | [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.4206 ± 0.0071 | **0.4400 ± 0.0039** | **0.4899 ± 0.0068** | 0.8927 ± 0.0163 | **0.6942 ± 0.0397** | 0.4641 ± 0.0082 | 0.5022 ± 0.0136 | **0.7467 ± 0.0041** | 0.1195 ± 0.0026 | **0.8564 ± 0.0265** | 0.5626 |
392
+ | [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | 0.4248 ± 0.0041 | 0.4403 ± 0.0046 | 0.5025 ± 0.0029 | 0.8901 ± 0.0123 | 0.7268 ± 0.0090 | **0.4627 ± 0.0083** | 0.4932 ± 0.0079 | 0.7596 ± 0.0044 | **0.1150 ± 0.0002** | 0.8735 ± 0.0053 | 0.5689 |
393
+
394
+
395
  **Bold** indicates the best result in the column; *italic* indicates the best result among ModChemBERT checkpoints.<br/>
396
  \* Published results from the ChemBERTa-3 [7] paper for optimized chemical language models using DeepChem scaffold splits.<br/>
397
+ † AVG column shows the mean score across classification tasks.<br/>
398
+ ‡ AVG column shows the mean scores across regression tasks without and with the clearance score.
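As a sanity check, both AVG variants in the footnote can be reproduced from the regression table; the sketch below uses the `MLM + DAPT + TAFT OPT` row:

```python
# RMSEs for MLM + DAPT + TAFT OPT: BACE, CLEARANCE, ESOL, FREESOLV, LIPO
rmse = {"bace": 0.9665, "clearance": 44.0137, "esol": 0.8158,
        "freesolv": 0.4979, "lipo": 0.6505}

# AVG without the clearance score, then with it
without_cl = [v for k, v in rmse.items() if k != "clearance"]
avg_without = sum(without_cl) / len(without_cl)   # ~0.7327
avg_with = sum(rmse.values()) / len(rmse)         # ~9.3889
print(avg_without, avg_with)
```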
399
 
400
  </details>
401
 
 
435
  | esol | 64 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |
436
  | freesolv | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
437
  | lipo | 32 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
438
+ | antimalarial | 16 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
439
+ | cocrystal | 16 | max_cls | 3 | 0.1 | 0.0 | 0.1 |
440
+ | covid19 | 16 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |
441
 
442
  </details>
443
 
 
471
  ```
472
 
473
  ## References
474
+ 1. Kallergis, G., Asgari, E., Empting, M. et al. Domain adaptable language modeling of chemical compounds identifies potent pathoblockers for Pseudomonas aeruginosa. Commun Chem 8, 114 (2025). https://doi.org/10.1038/s42004-025-01484-4
475
  2. Behrendt, Maike, Stefan Sylvius Wagner, and Stefan Harmeling. "MaxPoolBERT: Enhancing BERT Classification via Layer-and Token-Wise Aggregation." arXiv preprint arXiv:2505.15696 (2025).
476
  3. Sultan, Afnan, et al. "Transformers for molecular property prediction: Domain adaptation efficiently improves performance." arXiv preprint arXiv:2503.03360 (2025).
477
  4. Warner, Benjamin, et al. "Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference." arXiv preprint arXiv:2412.13663 (2024).
478
+ 5. Clavié, Benjamin. "JaColBERTv2.5: Optimising Multi-Vector Retrievers to Create State-of-the-Art Japanese Retrievers with Constrained Resources." arXiv preprint arXiv:2407.20750 (2024).
479
  6. Grattafiori, Aaron, et al. "The llama 3 herd of models." arXiv preprint arXiv:2407.21783 (2024).
480
+ 7. Singh R, Barsainyan AA, Irfan R, Amorin CJ, He S, Davis T, et al. "ChemBERTa-3: An Open Source Training Framework for Chemical Foundation Models." ChemRxiv preprint (2025). https://doi.org/10.26434/chemrxiv-2025-4glrl-v2
481
+ 8. Mswahili, M.E., Hwang, J., Rajapakse, J.C. et al. Positional embeddings and zero-shot learning using BERT for molecular-property prediction. J Cheminform 17, 17 (2025). https://doi.org/10.1186/s13321-025-00959-9
482
+ 9. Mswahili, M.E.; Ndomba, G.E.; Jo, K.; Jeong, Y.-S. Graph Neural Network and BERT Model for Antimalarial Drug Predictions Using Plasmodium Potential Targets. Applied Sciences, 2024, 14(4), 1472. https://doi.org/10.3390/app14041472
483
+ 10. Mswahili, M.E.; Lee, M.-J.; Martin, G.L.; Kim, J.; Kim, P.; Choi, G.J.; Jeong, Y.-S. Cocrystal Prediction Using Machine Learning Models and Descriptors. Applied Sciences, 2021, 11, 1323. https://doi.org/10.3390/app11031323
484
+ 11. Harigua-Souiai, E.; Heinhane, M.M.; Abdelkrim, Y.Z.; Souiai, O.; Abdeljaoued-Tej, I.; Guizani, I. Deep Learning Algorithms Achieved Satisfactory Predictions When Trained on a Novel Collection of Anticoronavirus Molecules. Frontiers in Genetics, 2021, 12:744170. https://doi.org/10.3389/fgene.2021.744170
485
+ 12. Cheng Fang, Ye Wang, Richard Grater, Sudarshan Kapadnis, Cheryl Black, Patrick Trapa, and Simone Sciabola. "Prospective Validation of Machine Learning Algorithms for Absorption, Distribution, Metabolism, and Excretion Prediction: An Industrial Perspective" Journal of Chemical Information and Modeling 2023 63 (11), 3263-3274 https://doi.org/10.1021/acs.jcim.3c00160
logs_modchembert_classification_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_antimalarial_epochs100_batch_size32_20250926_005715.log ADDED
@@ -0,0 +1,361 @@
1
+ 2025-09-26 00:57:15,135 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Running benchmark for dataset: antimalarial
2
+ 2025-09-26 00:57:15,135 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - dataset: antimalarial, tasks: ['label'], epochs: 100, learning rate: 3e-05
3
+ 2025-09-26 00:57:15,140 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset antimalarial at 2025-09-26_00-57-15
4
+ 2025-09-26 00:57:23,562 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5531 | Val mean-roc_auc_score: 0.7571
5
+ 2025-09-26 00:57:23,563 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 120
6
+ 2025-09-26 00:57:24,563 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7571
7
+ 2025-09-26 00:57:35,572 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4875 | Val mean-roc_auc_score: 0.8352
8
+ 2025-09-26 00:57:35,775 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 240
9
+ 2025-09-26 00:57:36,390 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8352
+ 2025-09-26 00:57:48,540 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4375 | Val mean-roc_auc_score: 0.8662
+ 2025-09-26 00:57:48,733 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 360
+ 2025-09-26 00:57:49,394 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8662
+ 2025-09-26 00:58:03,526 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3531 | Val mean-roc_auc_score: 0.8795
+ 2025-09-26 00:58:03,739 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 480
+ 2025-09-26 00:58:04,375 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8795
+ 2025-09-26 00:58:15,440 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2762 | Val mean-roc_auc_score: 0.8795
+ 2025-09-26 00:58:29,780 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2375 | Val mean-roc_auc_score: 0.8822
+ 2025-09-26 00:58:30,245 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 720
+ 2025-09-26 00:58:30,848 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8822
+ 2025-09-26 00:58:42,061 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1797 | Val mean-roc_auc_score: 0.8897
+ 2025-09-26 00:58:42,263 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 840
+ 2025-09-26 00:58:42,851 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8897
+ 2025-09-26 00:58:57,545 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1562 | Val mean-roc_auc_score: 0.8868
+ 2025-09-26 00:59:10,339 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1289 | Val mean-roc_auc_score: 0.8848
+ 2025-09-26 00:59:24,562 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1500 | Val mean-roc_auc_score: 0.8979
+ 2025-09-26 00:59:24,730 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 1200
+ 2025-09-26 00:59:25,421 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.8979
+ 2025-09-26 00:59:39,341 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1180 | Val mean-roc_auc_score: 0.8976
+ 2025-09-26 00:59:50,120 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0906 | Val mean-roc_auc_score: 0.8992
+ 2025-09-26 00:59:50,323 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 1440
+ 2025-09-26 00:59:50,961 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val mean-roc_auc_score: 0.8992
+ 2025-09-26 01:00:05,894 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0823 | Val mean-roc_auc_score: 0.9092
+ 2025-09-26 01:00:06,101 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 1560
+ 2025-09-26 01:00:06,883 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val mean-roc_auc_score: 0.9092
+ 2025-09-26 01:00:17,943 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0723 | Val mean-roc_auc_score: 0.8934
+ 2025-09-26 01:00:32,218 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0731 | Val mean-roc_auc_score: 0.8981
+ 2025-09-26 01:00:43,821 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0531 | Val mean-roc_auc_score: 0.9016
+ 2025-09-26 01:00:59,434 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0535 | Val mean-roc_auc_score: 0.9051
+ 2025-09-26 01:01:10,391 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0456 | Val mean-roc_auc_score: 0.9009
+ 2025-09-26 01:01:24,587 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0373 | Val mean-roc_auc_score: 0.9045
+ 2025-09-26 01:01:38,453 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0387 | Val mean-roc_auc_score: 0.8968
+ 2025-09-26 01:01:50,243 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0350 | Val mean-roc_auc_score: 0.8941
+ 2025-09-26 01:02:04,363 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0328 | Val mean-roc_auc_score: 0.8958
+ 2025-09-26 01:02:15,731 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0293 | Val mean-roc_auc_score: 0.8991
+ 2025-09-26 01:02:29,550 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0277 | Val mean-roc_auc_score: 0.8940
+ 2025-09-26 01:02:41,875 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0327 | Val mean-roc_auc_score: 0.8990
+ 2025-09-26 01:02:56,689 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0223 | Val mean-roc_auc_score: 0.8970
+ 2025-09-26 01:03:08,813 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0436 | Val mean-roc_auc_score: 0.8928
+ 2025-09-26 01:03:23,562 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0272 | Val mean-roc_auc_score: 0.8988
+ 2025-09-26 01:03:34,760 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0188 | Val mean-roc_auc_score: 0.8928
+ 2025-09-26 01:03:49,196 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.8914
+ 2025-09-26 01:04:04,577 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0293 | Val mean-roc_auc_score: 0.8973
+ 2025-09-26 01:04:16,352 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.8944
+ 2025-09-26 01:04:30,219 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0149 | Val mean-roc_auc_score: 0.8945
+ 2025-09-26 01:04:43,027 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8969
+ 2025-09-26 01:04:56,269 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0127 | Val mean-roc_auc_score: 0.8935
+ 2025-09-26 01:05:07,712 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0127 | Val mean-roc_auc_score: 0.8929
+ 2025-09-26 01:05:22,407 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0352 | Val mean-roc_auc_score: 0.8910
+ 2025-09-26 01:05:34,610 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.8960
+ 2025-09-26 01:05:49,406 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0275 | Val mean-roc_auc_score: 0.8979
+ 2025-09-26 01:06:00,985 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.8962
+ 2025-09-26 01:06:15,204 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0244 | Val mean-roc_auc_score: 0.8961
+ 2025-09-26 01:06:31,904 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0285 | Val mean-roc_auc_score: 0.8961
+ 2025-09-26 01:06:43,877 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.9018
+ 2025-09-26 01:06:58,549 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.9014
+ 2025-09-26 01:07:10,460 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0141 | Val mean-roc_auc_score: 0.8970
+ 2025-09-26 01:07:25,456 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.8999
+ 2025-09-26 01:07:37,641 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0132 | Val mean-roc_auc_score: 0.9008
+ 2025-09-26 01:07:51,581 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0104 | Val mean-roc_auc_score: 0.8976
+ 2025-09-26 01:08:03,007 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.8962
+ 2025-09-26 01:08:19,010 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0124 | Val mean-roc_auc_score: 0.8992
+ 2025-09-26 01:08:31,318 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8987
+ 2025-09-26 01:08:46,194 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0136 | Val mean-roc_auc_score: 0.8939
+ 2025-09-26 01:08:57,537 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0111 | Val mean-roc_auc_score: 0.8968
+ 2025-09-26 01:09:12,664 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.8966
+ 2025-09-26 01:09:27,943 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8940
+ 2025-09-26 01:09:40,048 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8954
+ 2025-09-26 01:09:54,670 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0099 | Val mean-roc_auc_score: 0.8946
+ 2025-09-26 01:10:07,061 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8964
+ 2025-09-26 01:10:23,118 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0102 | Val mean-roc_auc_score: 0.8924
+ 2025-09-26 01:10:34,130 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0099 | Val mean-roc_auc_score: 0.8958
+ 2025-09-26 01:10:48,462 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.8978
+ 2025-09-26 01:11:00,340 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.8954
+ 2025-09-26 01:11:14,527 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0077 | Val mean-roc_auc_score: 0.8954
+ 2025-09-26 01:11:30,322 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8955
+ 2025-09-26 01:11:41,347 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8961
+ 2025-09-26 01:11:55,800 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8987
+ 2025-09-26 01:12:12,307 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.8971
+ 2025-09-26 01:12:23,057 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.8954
+ 2025-09-26 01:12:37,443 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.8952
+ 2025-09-26 01:12:52,446 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.8953
+ 2025-09-26 01:13:04,178 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8956
+ 2025-09-26 01:13:20,391 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.8957
+ 2025-09-26 01:13:33,060 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.8968
+ 2025-09-26 01:13:47,997 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8969
+ 2025-09-26 01:14:01,665 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0051 | Val mean-roc_auc_score: 0.8962
+ 2025-09-26 01:14:16,886 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.8939
+ 2025-09-26 01:14:28,515 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0117 | Val mean-roc_auc_score: 0.8910
+ 2025-09-26 01:14:43,467 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.8923
+ 2025-09-26 01:14:55,407 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8918
+ 2025-09-26 01:15:09,546 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8927
+ 2025-09-26 01:15:21,607 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.8919
+ 2025-09-26 01:15:37,381 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8894
+ 2025-09-26 01:15:50,713 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8909
+ 2025-09-26 01:16:08,463 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0119 | Val mean-roc_auc_score: 0.8922
+ 2025-09-26 01:16:21,104 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0145 | Val mean-roc_auc_score: 0.8946
+ 2025-09-26 01:16:36,464 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0081 | Val mean-roc_auc_score: 0.8947
+ 2025-09-26 01:16:50,573 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0104 | Val mean-roc_auc_score: 0.8938
+ 2025-09-26 01:17:05,729 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8942
+ 2025-09-26 01:17:22,723 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.8940
+ 2025-09-26 01:17:34,479 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8949
+ 2025-09-26 01:17:48,650 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8964
+ 2025-09-26 01:18:01,944 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.8958
+ 2025-09-26 01:18:16,015 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8961
+ 2025-09-26 01:18:30,333 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.8952
+ 2025-09-26 01:18:41,834 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8953
+ 2025-09-26 01:18:56,509 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0117 | Val mean-roc_auc_score: 0.8949
+ 2025-09-26 01:19:08,604 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8949
+ 2025-09-26 01:19:22,468 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.8952
+ 2025-09-26 01:19:36,439 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8966
+ 2025-09-26 01:19:51,117 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.8966
+ 2025-09-26 01:19:51,897 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8677
+ 2025-09-26 01:19:52,209 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset antimalarial at 2025-09-26_01-19-52
+ 2025-09-26 01:20:02,442 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5531 | Val mean-roc_auc_score: 0.7647
+ 2025-09-26 01:20:02,442 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 120
+ 2025-09-26 01:20:03,145 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7647
+ 2025-09-26 01:20:11,715 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5031 | Val mean-roc_auc_score: 0.8216
+ 2025-09-26 01:20:11,964 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 240
+ 2025-09-26 01:20:12,603 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8216
+ 2025-09-26 01:20:26,670 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4167 | Val mean-roc_auc_score: 0.8692
+ 2025-09-26 01:20:26,874 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 360
+ 2025-09-26 01:20:27,596 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8692
+ 2025-09-26 01:20:41,210 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3625 | Val mean-roc_auc_score: 0.8743
+ 2025-09-26 01:20:41,416 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 480
+ 2025-09-26 01:20:42,116 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8743
+ 2025-09-26 01:20:53,671 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2812 | Val mean-roc_auc_score: 0.8879
+ 2025-09-26 01:20:53,878 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 600
+ 2025-09-26 01:20:54,431 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8879
+ 2025-09-26 01:21:07,886 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2687 | Val mean-roc_auc_score: 0.8863
+ 2025-09-26 01:21:20,410 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2031 | Val mean-roc_auc_score: 0.9004
+ 2025-09-26 01:21:20,611 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 840
+ 2025-09-26 01:21:21,207 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.9004
+ 2025-09-26 01:21:34,966 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1927 | Val mean-roc_auc_score: 0.9012
+ 2025-09-26 01:21:35,174 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 960
+ 2025-09-26 01:21:35,848 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.9012
+ 2025-09-26 01:21:47,965 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1508 | Val mean-roc_auc_score: 0.9052
+ 2025-09-26 01:21:48,166 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 1080
+ 2025-09-26 01:21:48,791 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.9052
+ 2025-09-26 01:22:02,470 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1044 | Val mean-roc_auc_score: 0.9032
+ 2025-09-26 01:22:16,001 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1359 | Val mean-roc_auc_score: 0.9007
+ 2025-09-26 01:22:27,242 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0867 | Val mean-roc_auc_score: 0.8966
+ 2025-09-26 01:22:40,738 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0813 | Val mean-roc_auc_score: 0.9039
+ 2025-09-26 01:22:52,329 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0980 | Val mean-roc_auc_score: 0.9047
+ 2025-09-26 01:23:06,319 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0547 | Val mean-roc_auc_score: 0.8991
+ 2025-09-26 01:23:18,133 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0605 | Val mean-roc_auc_score: 0.9041
+ 2025-09-26 01:23:33,287 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0555 | Val mean-roc_auc_score: 0.9023
+ 2025-09-26 01:23:47,625 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0440 | Val mean-roc_auc_score: 0.8980
+ 2025-09-26 01:23:58,911 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0539 | Val mean-roc_auc_score: 0.8997
+ 2025-09-26 01:24:12,563 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0453 | Val mean-roc_auc_score: 0.9044
+ 2025-09-26 01:24:24,338 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0512 | Val mean-roc_auc_score: 0.9042
+ 2025-09-26 01:24:38,775 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0555 | Val mean-roc_auc_score: 0.8990
+ 2025-09-26 01:24:49,949 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0259 | Val mean-roc_auc_score: 0.8980
+ 2025-09-26 01:25:04,536 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0293 | Val mean-roc_auc_score: 0.8993
+ 2025-09-26 01:25:19,711 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0256 | Val mean-roc_auc_score: 0.8993
+ 2025-09-26 01:25:30,430 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8970
+ 2025-09-26 01:25:44,607 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0285 | Val mean-roc_auc_score: 0.9000
+ 2025-09-26 01:25:58,365 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0365 | Val mean-roc_auc_score: 0.8977
+ 2025-09-26 01:26:10,267 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0221 | Val mean-roc_auc_score: 0.8981
+ 2025-09-26 01:26:24,733 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.8958
+ 2025-09-26 01:26:35,932 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0138 | Val mean-roc_auc_score: 0.8983
+ 2025-09-26 01:26:51,093 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.9009
+ 2025-09-26 01:27:05,158 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0141 | Val mean-roc_auc_score: 0.8972
+ 2025-09-26 01:27:18,052 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.8986
+ 2025-09-26 01:27:31,867 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8968
+ 2025-09-26 01:27:44,279 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.8997
+ 2025-09-26 01:27:58,732 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0120 | Val mean-roc_auc_score: 0.9011
+ 2025-09-26 01:28:12,359 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0104 | Val mean-roc_auc_score: 0.8989
+ 2025-09-26 01:28:24,102 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0108 | Val mean-roc_auc_score: 0.9009
+ 2025-09-26 01:28:37,771 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8991
+ 2025-09-26 01:28:49,127 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.9029
+ 2025-09-26 01:29:04,216 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0115 | Val mean-roc_auc_score: 0.9004
+ 2025-09-26 01:29:15,448 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0099 | Val mean-roc_auc_score: 0.8983
+ 2025-09-26 01:29:28,884 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0132 | Val mean-roc_auc_score: 0.8979
+ 2025-09-26 01:29:42,704 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0154 | Val mean-roc_auc_score: 0.8994
+ 2025-09-26 01:29:54,155 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.9015
+ 2025-09-26 01:30:08,085 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.9002
+ 2025-09-26 01:30:19,108 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0242 | Val mean-roc_auc_score: 0.8976
+ 2025-09-26 01:30:32,948 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8940
+ 2025-09-26 01:30:44,828 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0106 | Val mean-roc_auc_score: 0.8961
+ 2025-09-26 01:30:59,342 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0203 | Val mean-roc_auc_score: 0.8987
+ 2025-09-26 01:31:11,333 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0129 | Val mean-roc_auc_score: 0.8927
+ 2025-09-26 01:31:24,916 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0096 | Val mean-roc_auc_score: 0.8976
+ 2025-09-26 01:31:38,570 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0118 | Val mean-roc_auc_score: 0.8985
+ 2025-09-26 01:31:49,900 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0099 | Val mean-roc_auc_score: 0.8987
+ 2025-09-26 01:32:03,515 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0126 | Val mean-roc_auc_score: 0.8977
+ 2025-09-26 01:32:14,916 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.8976
+ 2025-09-26 01:32:28,269 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0100 | Val mean-roc_auc_score: 0.8974
+ 2025-09-26 01:32:41,120 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0075 | Val mean-roc_auc_score: 0.8983
+ 2025-09-26 01:32:55,715 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0075 | Val mean-roc_auc_score: 0.8988
+ 2025-09-26 01:33:10,324 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8967
+ 2025-09-26 01:33:23,475 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.8964
+ 2025-09-26 01:33:38,071 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8982
+ 2025-09-26 01:33:50,836 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.8978
+ 2025-09-26 01:34:05,701 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.8992
+ 2025-09-26 01:34:17,771 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.8990
+ 2025-09-26 01:34:34,044 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8988
+ 2025-09-26 01:34:45,987 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.8978
+ 2025-09-26 01:35:00,773 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0138 | Val mean-roc_auc_score: 0.8987
+ 2025-09-26 01:35:13,160 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.9001
+ 2025-09-26 01:35:28,471 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.9010
+ 2025-09-26 01:35:40,936 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.8972
+ 2025-09-26 01:35:56,356 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8983
+ 2025-09-26 01:36:09,163 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.8986
+ 2025-09-26 01:36:25,987 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8984
+ 2025-09-26 01:36:40,868 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8979
+ 2025-09-26 01:36:53,909 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.8978
+ 2025-09-26 01:37:08,330 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8980
+ 2025-09-26 01:37:20,003 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.8978
+ 2025-09-26 01:37:34,091 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0109 | Val mean-roc_auc_score: 0.8976
+ 2025-09-26 01:37:46,180 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8985
+ 2025-09-26 01:38:01,271 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.8985
+ 2025-09-26 01:38:13,830 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.8990
+ 2025-09-26 01:38:31,024 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8983
+ 2025-09-26 01:38:43,648 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.8986
+ 2025-09-26 01:38:57,577 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8982
+ 2025-09-26 01:39:11,015 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.8984
+ 2025-09-26 01:39:26,115 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.8986
+ 2025-09-26 01:39:39,003 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.8977
+ 2025-09-26 01:39:54,969 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8978
+ 2025-09-26 01:40:09,269 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8973
+ 2025-09-26 01:40:26,359 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.8954
+ 2025-09-26 01:40:39,433 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.8959
+ 2025-09-26 01:40:55,377 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8956
+ 2025-09-26 01:41:08,407 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.8976
+ 2025-09-26 01:41:24,518 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.8971
+ 2025-09-26 01:41:38,397 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8976
+ 2025-09-26 01:41:54,147 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.8938
+ 2025-09-26 01:42:06,959 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.8955
+ 2025-09-26 01:42:22,404 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.8961
+ 2025-09-26 01:42:23,314 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8802
+ 2025-09-26 01:42:23,674 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset antimalarial at 2025-09-26_01-42-23
+ 2025-09-26 01:42:35,827 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5281 | Val mean-roc_auc_score: 0.7641
+ 2025-09-26 01:42:35,827 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 120
+ 2025-09-26 01:42:36,621 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7641
+ 2025-09-26 01:42:48,921 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4656 | Val mean-roc_auc_score: 0.8293
+ 2025-09-26 01:42:49,116 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 240
+ 2025-09-26 01:42:49,854 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8293
+ 2025-09-26 01:43:05,282 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4000 | Val mean-roc_auc_score: 0.8511
+ 2025-09-26 01:43:05,500 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 360
+ 2025-09-26 01:43:06,214 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8511
+ 2025-09-26 01:43:19,628 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3516 | Val mean-roc_auc_score: 0.8682
+ 2025-09-26 01:43:19,843 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 480
+ 2025-09-26 01:43:20,614 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8682
+ 2025-09-26 01:43:35,807 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2913 | Val mean-roc_auc_score: 0.8858
+ 2025-09-26 01:43:36,017 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 600
+ 2025-09-26 01:43:36,772 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8858
+ 2025-09-26 01:43:49,035 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2266 | Val mean-roc_auc_score: 0.8856
+ 2025-09-26 01:44:03,858 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1547 | Val mean-roc_auc_score: 0.8866
+ 2025-09-26 01:44:04,057 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 840
+ 2025-09-26 01:44:04,661 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8866
+ 2025-09-26 01:44:16,280 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1906 | Val mean-roc_auc_score: 0.8873
+ 2025-09-26 01:44:16,491 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 960
+ 2025-09-26 01:44:17,142 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.8873
+ 2025-09-26 01:44:33,790 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1344 | Val mean-roc_auc_score: 0.8800
+ 2025-09-26 01:44:46,068 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1138 | Val mean-roc_auc_score: 0.8866
+ 2025-09-26 01:45:00,911 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1703 | Val mean-roc_auc_score: 0.8805
+ 2025-09-26 01:45:14,056 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0930 | Val mean-roc_auc_score: 0.8876
+ 2025-09-26 01:45:14,225 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 1440
+ 2025-09-26 01:45:14,909 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val mean-roc_auc_score: 0.8876
+ 2025-09-26 01:45:30,230 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0714 | Val mean-roc_auc_score: 0.8943
+ 2025-09-26 01:45:30,445 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 1560
+ 2025-09-26 01:45:31,092 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val mean-roc_auc_score: 0.8943
+ 2025-09-26 01:45:44,023 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0797 | Val mean-roc_auc_score: 0.8868
+ 2025-09-26 01:45:59,018 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0681 | Val mean-roc_auc_score: 0.8886
+ 2025-09-26 01:46:12,195 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0559 | Val mean-roc_auc_score: 0.8921
+ 2025-09-26 01:46:28,708 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0551 | Val mean-roc_auc_score: 0.8891
+ 2025-09-26 01:46:41,010 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0760 | Val mean-roc_auc_score: 0.8874
+ 2025-09-26 01:46:55,745 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0352 | Val mean-roc_auc_score: 0.8902
+ 2025-09-26 01:47:08,514 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0261 | Val mean-roc_auc_score: 0.8854
+ 2025-09-26 01:47:23,160 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0311 | Val mean-roc_auc_score: 0.8863
+ 2025-09-26 01:47:37,006 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0385 | Val mean-roc_auc_score: 0.8827
+ 2025-09-26 01:47:52,539 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.8840
+ 2025-09-26 01:48:05,609 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0293 | Val mean-roc_auc_score: 0.8871
+ 2025-09-26 01:48:22,800 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0522 | Val mean-roc_auc_score: 0.8879
+ 2025-09-26 01:48:36,441 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0504 | Val mean-roc_auc_score: 0.8740
+ 2025-09-26 01:48:52,677 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0363 | Val mean-roc_auc_score: 0.8755
+ 2025-09-26 01:49:06,573 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0247 | Val mean-roc_auc_score: 0.8755
+ 2025-09-26 01:49:22,843 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0260 | Val mean-roc_auc_score: 0.8820
+ 2025-09-26 01:49:37,987 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8854
+ 2025-09-26 01:49:53,674 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0176 | Val mean-roc_auc_score: 0.8809
+ 2025-09-26 01:50:08,029 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0287 | Val mean-roc_auc_score: 0.8771
+ 2025-09-26 01:50:23,621 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0191 | Val mean-roc_auc_score: 0.8805
+ 2025-09-26 01:50:40,351 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0301 | Val mean-roc_auc_score: 0.8822
+ 2025-09-26 01:50:55,829 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.8865
+ 2025-09-26 01:51:08,857 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.8863
+ 2025-09-26 01:51:25,198 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0194 | Val mean-roc_auc_score: 0.8864
+ 2025-09-26 01:51:38,421 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0122 | Val mean-roc_auc_score: 0.8849
+ 2025-09-26 01:51:53,688 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0126 | Val mean-roc_auc_score: 0.8834
+ 2025-09-26 01:52:06,824 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0147 | Val mean-roc_auc_score: 0.8781
+ 2025-09-26 01:52:22,793 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0217 | Val mean-roc_auc_score: 0.8798
+ 2025-09-26 01:52:37,948 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0235 | Val mean-roc_auc_score: 0.8822
+ 2025-09-26 01:52:53,141 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.8807
+ 2025-09-26 01:53:07,268 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0140 | Val mean-roc_auc_score: 0.8816
+ 2025-09-26 01:53:23,353 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.8799
+ 2025-09-26 01:53:37,426 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0121 | Val mean-roc_auc_score: 0.8809
+ 2025-09-26 01:53:53,674 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.8792
+ 2025-09-26 01:54:06,082 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0111 | Val mean-roc_auc_score: 0.8800
+ 2025-09-26 01:54:21,699 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.8778
+ 2025-09-26 01:54:35,829 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0122 | Val mean-roc_auc_score: 0.8804
+ 2025-09-26 01:54:50,964 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0084 | Val mean-roc_auc_score: 0.8813
+ 2025-09-26 01:55:03,833 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0119 | Val mean-roc_auc_score: 0.8820
+ 2025-09-26 01:55:19,815 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0142 | Val mean-roc_auc_score: 0.8796
+ 2025-09-26 01:55:31,957 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0119 | Val mean-roc_auc_score: 0.8804
+ 2025-09-26 01:55:46,768 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0126 | Val mean-roc_auc_score: 0.8822
+ 2025-09-26 01:56:00,191 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.8817
+ 2025-09-26 01:56:15,690 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0099 | Val mean-roc_auc_score: 0.8823
+ 2025-09-26 01:56:30,056 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0132 | Val mean-roc_auc_score: 0.8768
+ 2025-09-26 01:56:43,673 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0090 | Val mean-roc_auc_score: 0.8780
+ 2025-09-26 01:56:57,859 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0123 | Val mean-roc_auc_score: 0.8818
+ 2025-09-26 01:57:09,980 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0127 | Val mean-roc_auc_score: 0.8743
+ 2025-09-26 01:57:25,120 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.8754
+ 2025-09-26 01:57:37,297 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0137 | Val mean-roc_auc_score: 0.8842
+ 2025-09-26 01:57:51,716 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0078 | Val mean-roc_auc_score: 0.8801
+ 2025-09-26 01:58:03,715 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0084 | Val mean-roc_auc_score: 0.8784
+ 2025-09-26 01:58:18,451 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.8829
+ 2025-09-26 01:58:32,744 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.8822
+ 2025-09-26 01:58:47,002 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8816
+ 2025-09-26 01:58:58,773 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8798
+ 2025-09-26 01:59:14,993 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8805
+ 2025-09-26 01:59:30,248 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8827
+ 2025-09-26 01:59:43,845 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.8795
+ 2025-09-26 01:59:58,933 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0090 | Val mean-roc_auc_score: 0.8811
+ 2025-09-26 02:00:12,450 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8810
+ 2025-09-26 02:00:29,733 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.8790
+ 2025-09-26 02:00:42,565 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8806
+ 2025-09-26 02:00:57,605 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0092 | Val mean-roc_auc_score: 0.8759
+ 2025-09-26 02:01:09,899 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8758
+ 2025-09-26 02:01:25,027 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.8771
+ 2025-09-26 02:01:38,110 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.8773
+ 2025-09-26 02:01:53,712 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.8758
+ 2025-09-26 02:02:07,699 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8759
+ 2025-09-26 02:02:22,806 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.8776
+ 2025-09-26 02:02:36,480 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.8774
+ 2025-09-26 02:02:51,296 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.8769
+ 2025-09-26 02:03:04,503 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0123 | Val mean-roc_auc_score: 0.8802
+ 2025-09-26 02:03:20,011 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.8773
+ 2025-09-26 02:03:32,391 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.8767
+ 2025-09-26 02:03:47,589 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.8757
+ 2025-09-26 02:04:00,589 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0099 | Val mean-roc_auc_score: 0.8741
+ 2025-09-26 02:04:16,192 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0144 | Val mean-roc_auc_score: 0.8744
+ 2025-09-26 02:04:30,786 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8747
+ 2025-09-26 02:04:45,748 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0074 | Val mean-roc_auc_score: 0.8767
+ 2025-09-26 02:04:57,452 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8797
+ 2025-09-26 02:05:13,216 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8786
+ 2025-09-26 02:05:27,134 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.8778
+ 2025-09-26 02:05:39,457 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.8769
+ 2025-09-26 02:05:53,795 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0081 | Val mean-roc_auc_score: 0.8767
+ 2025-09-26 02:06:07,207 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0081 | Val mean-roc_auc_score: 0.8765
+ 2025-09-26 02:06:23,493 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8773
+ 2025-09-26 02:06:24,439 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8789
+ 2025-09-26 02:06:24,814 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8756, Std Dev: 0.0056
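The "Final Triplicate Test Results" line above aggregates the per-run test scores. A minimal sketch of that aggregation, assuming the reported Std Dev is a population standard deviation: runs 2 and 3 log test scores of 0.8802 and 0.8789, while run 1's score falls outside this excerpt, so the 0.8677 below is a hypothetical value back-solved from the reported average of 0.8756.

```python
import statistics

# Per-run test mean-roc_auc_score values. Runs 2 and 3 come from the log above;
# run 1's 0.8677 is an assumed value inferred from the reported average.
test_scores = [0.8677, 0.8802, 0.8789]

avg = statistics.mean(test_scores)
std = statistics.pstdev(test_scores)  # population std dev, not sample (stdev)

print(f"Avg mean-roc_auc_score: {avg:.4f}, Std Dev: {std:.4f}")
```

With these inputs the printed values match the logged summary (0.8756 and 0.0056), which is consistent with `pstdev` rather than the sample estimator `stdev`.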
logs_modchembert_classification_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_cocrystal_epochs100_batch_size32_20250926_032557.log ADDED
@@ -0,0 +1,351 @@
+ 2025-09-26 03:25:57,284 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Running benchmark for dataset: cocrystal
+ 2025-09-26 03:25:57,284 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - dataset: cocrystal, tasks: ['label'], epochs: 100, learning rate: 3e-05
+ 2025-09-26 03:25:57,290 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset cocrystal at 2025-09-26_03-25-57
+ 2025-09-26 03:26:03,705 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7027 | Val mean-roc_auc_score: 0.7042
+ 2025-09-26 03:26:03,705 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
+ 2025-09-26 03:26:04,651 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7042
+ 2025-09-26 03:26:08,855 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4932 | Val mean-roc_auc_score: 0.7918
+ 2025-09-26 03:26:09,062 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
+ 2025-09-26 03:26:09,724 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.7918
+ 2025-09-26 03:26:14,506 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4489 | Val mean-roc_auc_score: 0.8279
+ 2025-09-26 03:26:14,699 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
+ 2025-09-26 03:26:15,427 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8279
+ 2025-09-26 03:26:20,116 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4155 | Val mean-roc_auc_score: 0.7926
+ 2025-09-26 03:26:22,453 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3919 | Val mean-roc_auc_score: 0.8062
+ 2025-09-26 03:26:27,723 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3594 | Val mean-roc_auc_score: 0.8350
+ 2025-09-26 03:26:28,339 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 222
+ 2025-09-26 03:26:29,214 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8350
+ 2025-09-26 03:26:35,058 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3243 | Val mean-roc_auc_score: 0.8404
+ 2025-09-26 03:26:35,296 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 259
+ 2025-09-26 03:26:35,965 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8404
+ 2025-09-26 03:26:42,050 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.3024 | Val mean-roc_auc_score: 0.8460
+ 2025-09-26 03:26:42,255 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 296
+ 2025-09-26 03:26:42,905 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.8460
+ 2025-09-26 03:26:48,879 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.3428 | Val mean-roc_auc_score: 0.8486
+ 2025-09-26 03:26:49,101 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 333
+ 2025-09-26 03:26:49,713 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.8486
+ 2025-09-26 03:26:53,316 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2669 | Val mean-roc_auc_score: 0.8538
+ 2025-09-26 03:26:53,588 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 370
+ 2025-09-26 03:26:54,290 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.8538
+ 2025-09-26 03:26:59,851 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2790 | Val mean-roc_auc_score: 0.8712
+ 2025-09-26 03:27:00,410 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 407
+ 2025-09-26 03:27:01,052 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val mean-roc_auc_score: 0.8712
+ 2025-09-26 03:27:07,162 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2382 | Val mean-roc_auc_score: 0.8722
+ 2025-09-26 03:27:07,479 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 444
+ 2025-09-26 03:27:08,155 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val mean-roc_auc_score: 0.8722
+ 2025-09-26 03:27:14,666 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2179 | Val mean-roc_auc_score: 0.8682
+ 2025-09-26 03:27:20,783 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2760 | Val mean-roc_auc_score: 0.8421
+ 2025-09-26 03:27:24,916 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1943 | Val mean-roc_auc_score: 0.8588
+ 2025-09-26 03:27:31,327 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1892 | Val mean-roc_auc_score: 0.8309
+ 2025-09-26 03:27:38,586 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1541 | Val mean-roc_auc_score: 0.8334
+ 2025-09-26 03:27:45,478 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1622 | Val mean-roc_auc_score: 0.8026
+ 2025-09-26 03:27:49,757 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1029 | Val mean-roc_auc_score: 0.8402
+ 2025-09-26 03:27:56,880 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1334 | Val mean-roc_auc_score: 0.8134
+ 2025-09-26 03:28:03,819 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0959 | Val mean-roc_auc_score: 0.8112
+ 2025-09-26 03:28:10,905 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1596 | Val mean-roc_auc_score: 0.8228
+ 2025-09-26 03:28:17,336 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0980 | Val mean-roc_auc_score: 0.8222
+ 2025-09-26 03:28:21,404 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0722 | Val mean-roc_auc_score: 0.8233
+ 2025-09-26 03:28:27,870 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1225 | Val mean-roc_auc_score: 0.8538
+ 2025-09-26 03:28:34,244 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0756 | Val mean-roc_auc_score: 0.8158
+ 2025-09-26 03:28:42,089 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.1208 | Val mean-roc_auc_score: 0.7926
+ 2025-09-26 03:28:48,448 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0877 | Val mean-roc_auc_score: 0.8185
+ 2025-09-26 03:28:51,909 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0462 | Val mean-roc_auc_score: 0.8218
+ 2025-09-26 03:28:57,836 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0348 | Val mean-roc_auc_score: 0.8186
+ 2025-09-26 03:29:04,626 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0304 | Val mean-roc_auc_score: 0.8303
+ 2025-09-26 03:29:11,423 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0226 | Val mean-roc_auc_score: 0.8335
+ 2025-09-26 03:29:17,921 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0435 | Val mean-roc_auc_score: 0.8105
+ 2025-09-26 03:29:21,744 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0777 | Val mean-roc_auc_score: 0.7824
+ 2025-09-26 03:29:28,035 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.1098 | Val mean-roc_auc_score: 0.8349
+ 2025-09-26 03:29:34,416 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0530 | Val mean-roc_auc_score: 0.8301
+ 2025-09-26 03:29:41,215 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0287 | Val mean-roc_auc_score: 0.8264
+ 2025-09-26 03:29:48,221 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0465 | Val mean-roc_auc_score: 0.8245
+ 2025-09-26 03:29:51,282 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0158 | Val mean-roc_auc_score: 0.8194
+ 2025-09-26 03:29:57,354 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0154 | Val mean-roc_auc_score: 0.8167
+ 2025-09-26 03:30:03,145 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0172 | Val mean-roc_auc_score: 0.8193
+ 2025-09-26 03:30:09,414 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0118 | Val mean-roc_auc_score: 0.8219
+ 2025-09-26 03:30:15,162 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8213
+ 2025-09-26 03:30:18,551 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.8215
+ 2025-09-26 03:30:24,802 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0082 | Val mean-roc_auc_score: 0.8182
+ 2025-09-26 03:30:30,991 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.8031
+ 2025-09-26 03:30:37,417 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0077 | Val mean-roc_auc_score: 0.8109
+ 2025-09-26 03:30:43,501 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.8184
+ 2025-09-26 03:30:49,096 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8133
+ 2025-09-26 03:30:52,521 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.8103
+ 2025-09-26 03:30:58,126 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.8056
+ 2025-09-26 03:31:04,488 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.8094
+ 2025-09-26 03:31:10,528 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0078 | Val mean-roc_auc_score: 0.8155
+ 2025-09-26 03:31:16,282 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8174
+ 2025-09-26 03:31:20,908 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.8182
+ 2025-09-26 03:31:26,676 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8135
+ 2025-09-26 03:31:33,278 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.8134
+ 2025-09-26 03:31:39,557 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.8167
+ 2025-09-26 03:31:45,267 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.8199
+ 2025-09-26 03:31:48,633 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8201
+ 2025-09-26 03:31:54,464 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.8350
+ 2025-09-26 03:32:00,581 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0416 | Val mean-roc_auc_score: 0.7687
+ 2025-09-26 03:32:06,607 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0454 | Val mean-roc_auc_score: 0.7591
+ 2025-09-26 03:32:12,546 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0241 | Val mean-roc_auc_score: 0.7579
+ 2025-09-26 03:32:18,592 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0093 | Val mean-roc_auc_score: 0.7688
+ 2025-09-26 03:32:21,898 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.7730
+ 2025-09-26 03:32:28,034 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.7734
+ 2025-09-26 03:32:33,466 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.7739
+ 2025-09-26 03:32:39,191 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.7736
+ 2025-09-26 03:32:44,700 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.7792
+ 2025-09-26 03:32:47,724 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.7787
+ 2025-09-26 03:32:53,684 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.7779
+ 2025-09-26 03:32:59,151 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.7942
+ 2025-09-26 03:33:05,106 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0077 | Val mean-roc_auc_score: 0.7844
+ 2025-09-26 03:33:10,666 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.7882
+ 2025-09-26 03:33:16,116 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7892
+ 2025-09-26 03:33:19,706 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.7877
+ 2025-09-26 03:33:25,374 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.7932
+ 2025-09-26 03:33:31,085 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.7936
+ 2025-09-26 03:33:36,844 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.7942
+ 2025-09-26 03:33:42,347 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.7948
+ 2025-09-26 03:33:46,900 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7933
+ 2025-09-26 03:33:52,212 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.7936
+ 2025-09-26 03:33:58,265 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.7934
+ 2025-09-26 03:34:03,979 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0028 | Val mean-roc_auc_score: 0.7914
+ 2025-09-26 03:34:09,571 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0030 | Val mean-roc_auc_score: 0.7935
+ 2025-09-26 03:34:15,632 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0019 | Val mean-roc_auc_score: 0.7964
+ 2025-09-26 03:34:18,992 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7935
+ 2025-09-26 03:34:24,586 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.7987
+ 2025-09-26 03:34:30,124 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0115 | Val mean-roc_auc_score: 0.8019
+ 2025-09-26 03:34:35,772 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.7909
+ 2025-09-26 03:34:41,813 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.7894
+ 2025-09-26 03:34:47,422 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.7897
+ 2025-09-26 03:34:50,505 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.7902
+ 2025-09-26 03:34:56,557 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.7936
+ 2025-09-26 03:35:01,972 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.7948
+ 2025-09-26 03:35:08,009 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.7948
+ 2025-09-26 03:35:13,476 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7926
+ 2025-09-26 03:35:16,383 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7894
+ 2025-09-26 03:35:21,942 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0019 | Val mean-roc_auc_score: 0.7915
+ 2025-09-26 03:35:22,509 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8123
+ 2025-09-26 03:35:22,864 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset cocrystal at 2025-09-26_03-35-22
+ 2025-09-26 03:35:27,735 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5912 | Val mean-roc_auc_score: 0.7997
+ 2025-09-26 03:35:27,735 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
+ 2025-09-26 03:35:28,391 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7997
+ 2025-09-26 03:35:33,915 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4628 | Val mean-roc_auc_score: 0.8185
+ 2025-09-26 03:35:34,130 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
+ 2025-09-26 03:35:34,750 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8185
+ 2025-09-26 03:35:40,569 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4432 | Val mean-roc_auc_score: 0.8515
+ 2025-09-26 03:35:40,781 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
+ 2025-09-26 03:35:41,477 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8515
+ 2025-09-26 03:35:45,080 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3733 | Val mean-roc_auc_score: 0.8493
+ 2025-09-26 03:35:51,088 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3564 | Val mean-roc_auc_score: 0.8315
+ 2025-09-26 03:35:56,820 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3168 | Val mean-roc_auc_score: 0.8680
+ 2025-09-26 03:35:57,375 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 222
+ 2025-09-26 03:35:58,044 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8680
+ 2025-09-26 03:36:03,505 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3024 | Val mean-roc_auc_score: 0.8557
+ 2025-09-26 03:36:08,763 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2990 | Val mean-roc_auc_score: 0.8606
+ 2025-09-26 03:36:14,217 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2841 | Val mean-roc_auc_score: 0.8528
+ 2025-09-26 03:36:17,709 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2703 | Val mean-roc_auc_score: 0.8963
+ 2025-09-26 03:36:17,924 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 370
+ 2025-09-26 03:36:18,556 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.8963
+ 2025-09-26 03:36:24,239 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2310 | Val mean-roc_auc_score: 0.8755
+ 2025-09-26 03:36:30,123 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2416 | Val mean-roc_auc_score: 0.8583
+ 2025-09-26 03:36:35,664 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2095 | Val mean-roc_auc_score: 0.8596
+ 2025-09-26 03:36:41,386 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1849 | Val mean-roc_auc_score: 0.8634
+ 2025-09-26 03:36:46,905 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1731 | Val mean-roc_auc_score: 0.8671
+ 2025-09-26 03:36:49,794 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1799 | Val mean-roc_auc_score: 0.8665
+ 2025-09-26 03:36:55,751 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1897 | Val mean-roc_auc_score: 0.8714
+ 2025-09-26 03:37:01,185 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1647 | Val mean-roc_auc_score: 0.8581
+ 2025-09-26 03:37:06,545 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1667 | Val mean-roc_auc_score: 0.8413
+ 2025-09-26 03:37:12,134 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1470 | Val mean-roc_auc_score: 0.8621
+ 2025-09-26 03:37:15,329 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1292 | Val mean-roc_auc_score: 0.8530
+ 2025-09-26 03:37:21,083 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1077 | Val mean-roc_auc_score: 0.8396
+ 2025-09-26 03:37:26,279 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0959 | Val mean-roc_auc_score: 0.8477
+ 2025-09-26 03:37:31,693 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0680 | Val mean-roc_auc_score: 0.8312
+ 2025-09-26 03:37:37,433 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.1225 | Val mean-roc_auc_score: 0.8102
+ 2025-09-26 03:37:43,130 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0688 | Val mean-roc_auc_score: 0.8537
+ 2025-09-26 03:37:47,570 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0549 | Val mean-roc_auc_score: 0.8554
+ 2025-09-26 03:37:53,645 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0590 | Val mean-roc_auc_score: 0.8301
+ 2025-09-26 03:37:59,466 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0467 | Val mean-roc_auc_score: 0.8633
+ 2025-09-26 03:38:05,610 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0453 | Val mean-roc_auc_score: 0.8503
+ 2025-09-26 03:38:11,312 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0384 | Val mean-roc_auc_score: 0.8476
+ 2025-09-26 03:38:15,001 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0359 | Val mean-roc_auc_score: 0.8632
+ 2025-09-26 03:38:21,052 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0830 | Val mean-roc_auc_score: 0.8201
+ 2025-09-26 03:38:26,999 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0625 | Val mean-roc_auc_score: 0.8224
+ 2025-09-26 03:38:33,050 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0269 | Val mean-roc_auc_score: 0.8393
+ 2025-09-26 03:38:39,381 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.8332
+ 2025-09-26 03:38:45,959 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8392
+ 2025-09-26 03:38:49,617 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0115 | Val mean-roc_auc_score: 0.8407
+ 2025-09-26 03:38:55,576 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0117 | Val mean-roc_auc_score: 0.8433
+ 2025-09-26 03:39:01,370 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.8461
+ 2025-09-26 03:39:07,286 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0138 | Val mean-roc_auc_score: 0.8542
+ 2025-09-26 03:39:13,764 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0120 | Val mean-roc_auc_score: 0.8477
+ 2025-09-26 03:39:17,133 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0081 | Val mean-roc_auc_score: 0.8450
+ 2025-09-26 03:39:22,783 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8491
+ 2025-09-26 03:39:28,503 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.8406
+ 2025-09-26 03:39:34,344 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0114 | Val mean-roc_auc_score: 0.8434
+ 2025-09-26 03:39:40,546 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.8357
+ 2025-09-26 03:39:43,506 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0389 | Val mean-roc_auc_score: 0.8148
+ 2025-09-26 03:39:49,309 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0472 | Val mean-roc_auc_score: 0.8316
+ 2025-09-26 03:39:54,369 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0773 | Val mean-roc_auc_score: 0.8109
+ 2025-09-26 03:39:59,945 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0633 | Val mean-roc_auc_score: 0.8389
+ 2025-09-26 03:40:05,947 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0254 | Val mean-roc_auc_score: 0.8171
+ 2025-09-26 03:40:11,649 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.8212
+ 2025-09-26 03:40:15,223 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.8229
+ 2025-09-26 03:40:22,858 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0121 | Val mean-roc_auc_score: 0.8258
+ 2025-09-26 03:40:29,171 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0121 | Val mean-roc_auc_score: 0.8272
+ 2025-09-26 03:40:36,009 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0157 | Val mean-roc_auc_score: 0.8318
+ 2025-09-26 03:40:41,517 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8326
+ 2025-09-26 03:40:44,783 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8259
+ 2025-09-26 03:40:50,239 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.8242
+ 2025-09-26 03:40:55,662 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8325
+ 2025-09-26 03:41:01,377 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0102 | Val mean-roc_auc_score: 0.8405
+ 2025-09-26 03:41:07,015 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0188 | Val mean-roc_auc_score: 0.7836
+ 2025-09-26 03:41:12,754 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0298 | Val mean-roc_auc_score: 0.7852
+ 2025-09-26 03:41:16,361 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0330 | Val mean-roc_auc_score: 0.7926
+ 2025-09-26 03:41:22,362 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0308 | Val mean-roc_auc_score: 0.8130
+ 2025-09-26 03:41:28,763 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0164 | Val mean-roc_auc_score: 0.7922
+ 2025-09-26 03:41:34,405 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0123 | Val mean-roc_auc_score: 0.7898
+ 2025-09-26 03:41:40,240 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0074 | Val mean-roc_auc_score: 0.7878
+ 2025-09-26 03:41:43,604 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0084 | Val mean-roc_auc_score: 0.7892
+ 2025-09-26 03:41:49,419 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.7883
+ 2025-09-26 03:41:55,600 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.7884
+ 2025-09-26 03:42:01,179 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0011 | Val mean-roc_auc_score: 0.7883
+ 2025-09-26 03:42:06,654 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.7920
+ 2025-09-26 03:42:12,405 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.7906
+ 2025-09-26 03:42:15,670 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.7893
+ 2025-09-26 03:42:21,632 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.7904
+ 2025-09-26 03:42:26,710 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.7905
+ 2025-09-26 03:42:32,200 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.7908
+ 2025-09-26 03:42:37,850 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.7904
+ 2025-09-26 03:42:43,743 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0074 | Val mean-roc_auc_score: 0.7884
+ 2025-09-26 03:42:48,720 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.7962
+ 2025-09-26 03:42:53,910 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.7947
+ 2025-09-26 03:42:59,806 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7954
+ 2025-09-26 03:43:05,101 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0051 | Val mean-roc_auc_score: 0.7967
+ 2025-09-26 03:43:10,661 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.7998
+ 2025-09-26 03:43:14,219 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.8009
+ 2025-09-26 03:43:19,963 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.8006
+ 2025-09-26 03:43:25,328 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0028 | Val mean-roc_auc_score: 0.8009
+ 2025-09-26 03:43:30,834 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0198 | Val mean-roc_auc_score: 0.7970
+ 2025-09-26 03:43:36,388 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.8002
+ 2025-09-26 03:43:42,490 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8026
+ 2025-09-26 03:43:45,441 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.8028
+ 2025-09-26 03:43:50,804 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8041
+ 2025-09-26 03:43:56,256 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.7976
+ 2025-09-26 03:44:01,902 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0112 | Val mean-roc_auc_score: 0.7873
+ 2025-09-26 03:44:08,212 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.7932
+ 2025-09-26 03:44:11,072 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0032 | Val mean-roc_auc_score: 0.7962
+ 2025-09-26 03:44:16,485 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.7950
+ 2025-09-26 03:44:22,162 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.7927
+ 2025-09-26 03:44:22,665 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8471
+ 2025-09-26 03:44:23,031 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset cocrystal at 2025-09-26_03-44-23
+ 2025-09-26 03:44:27,771 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6622 | Val mean-roc_auc_score: 0.7453
+ 2025-09-26 03:44:27,771 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
+ 2025-09-26 03:44:28,528 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7453
+ 2025-09-26 03:44:34,279 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4797 | Val mean-roc_auc_score: 0.7941
+ 2025-09-26 03:44:34,586 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
+ 2025-09-26 03:44:35,437 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.7941
+ 2025-09-26 03:44:41,399 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4006 | Val mean-roc_auc_score: 0.8502
+ 2025-09-26 03:44:41,601 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
+ 2025-09-26 03:44:42,254 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8502
+ 2025-09-26 03:44:46,179 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4223 | Val mean-roc_auc_score: 0.8284
+ 2025-09-26 03:44:51,458 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3750 | Val mean-roc_auc_score: 0.8517
+ 2025-09-26 03:44:51,667 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 185
+ 2025-09-26 03:44:52,356 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8517
+ 2025-09-26 03:44:58,130 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3750 | Val mean-roc_auc_score: 0.8461
+ 2025-09-26 03:45:04,377 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3480 | Val mean-roc_auc_score: 0.8428
+ 2025-09-26 03:45:09,297 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.3226 | Val mean-roc_auc_score: 0.8455
+ 2025-09-26 03:45:12,162 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2860 | Val mean-roc_auc_score: 0.8551
+ 2025-09-26 03:45:12,370 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 333
+ 2025-09-26 03:45:13,011 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.8551
+ 2025-09-26 03:45:18,281 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2686 | Val mean-roc_auc_score: 0.8756
+ 2025-09-26 03:45:18,502 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 370
+ 2025-09-26 03:45:19,165 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.8756
260
+ 2025-09-26 03:45:24,470 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2846 | Val mean-roc_auc_score: 0.8445
261
+ 2025-09-26 03:45:30,139 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2584 | Val mean-roc_auc_score: 0.8399
262
+ 2025-09-26 03:45:35,141 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2230 | Val mean-roc_auc_score: 0.8369
263
+ 2025-09-26 03:45:40,909 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2483 | Val mean-roc_auc_score: 0.8368
264
+ 2025-09-26 03:45:43,822 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.2019 | Val mean-roc_auc_score: 0.8365
265
+ 2025-09-26 03:45:49,430 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1782 | Val mean-roc_auc_score: 0.8253
266
+ 2025-09-26 03:45:55,337 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1789 | Val mean-roc_auc_score: 0.8573
267
+ 2025-09-26 03:46:00,774 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1799 | Val mean-roc_auc_score: 0.8534
268
+ 2025-09-26 03:46:06,488 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1100 | Val mean-roc_auc_score: 0.8272
269
+ 2025-09-26 03:46:12,111 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1284 | Val mean-roc_auc_score: 0.8378
270
+ 2025-09-26 03:46:15,455 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1360 | Val mean-roc_auc_score: 0.8580
271
+ 2025-09-26 03:46:21,506 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1099 | Val mean-roc_auc_score: 0.8359
272
+ 2025-09-26 03:46:27,313 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0891 | Val mean-roc_auc_score: 0.8160
273
+ 2025-09-26 03:46:33,036 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0992 | Val mean-roc_auc_score: 0.8296
274
+ 2025-09-26 03:46:38,596 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0925 | Val mean-roc_auc_score: 0.8487
275
+ 2025-09-26 03:46:42,019 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.1360 | Val mean-roc_auc_score: 0.8322
276
+ 2025-09-26 03:46:48,976 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0777 | Val mean-roc_auc_score: 0.8293
+ 2025-09-26 03:46:54,627 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0543 | Val mean-roc_auc_score: 0.8048
+ 2025-09-26 03:47:00,258 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0382 | Val mean-roc_auc_score: 0.8263
+ 2025-09-26 03:47:05,742 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0441 | Val mean-roc_auc_score: 0.8126
+ 2025-09-26 03:47:11,039 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0437 | Val mean-roc_auc_score: 0.7836
+ 2025-09-26 03:47:14,538 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0357 | Val mean-roc_auc_score: 0.8157
+ 2025-09-26 03:47:20,142 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0348 | Val mean-roc_auc_score: 0.7890
+ 2025-09-26 03:47:25,563 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0283 | Val mean-roc_auc_score: 0.8203
+ 2025-09-26 03:47:31,041 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0269 | Val mean-roc_auc_score: 0.8190
+ 2025-09-26 03:47:36,733 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0181 | Val mean-roc_auc_score: 0.8206
+ 2025-09-26 03:47:40,237 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0127 | Val mean-roc_auc_score: 0.8093
+ 2025-09-26 03:47:45,581 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0599 | Val mean-roc_auc_score: 0.8173
+ 2025-09-26 03:47:51,184 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0604 | Val mean-roc_auc_score: 0.8082
+ 2025-09-26 03:47:56,515 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0667 | Val mean-roc_auc_score: 0.7935
+ 2025-09-26 03:48:02,260 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0256 | Val mean-roc_auc_score: 0.7924
+ 2025-09-26 03:48:08,840 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.7861
+ 2025-09-26 03:48:12,088 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.7822
+ 2025-09-26 03:48:18,312 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0098 | Val mean-roc_auc_score: 0.7792
+ 2025-09-26 03:48:24,369 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.7809
+ 2025-09-26 03:48:30,135 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.7729
+ 2025-09-26 03:48:36,792 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0082 | Val mean-roc_auc_score: 0.7808
+ 2025-09-26 03:48:40,204 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.7814
+ 2025-09-26 03:48:45,964 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.7824
+ 2025-09-26 03:48:51,464 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0057 | Val mean-roc_auc_score: 0.7822
+ 2025-09-26 03:48:56,962 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.7891
+ 2025-09-26 03:49:02,880 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.7946
+ 2025-09-26 03:49:08,932 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.7908
+ 2025-09-26 03:49:12,095 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.7930
+ 2025-09-26 03:49:19,081 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.7929
+ 2025-09-26 03:49:24,656 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.7911
+ 2025-09-26 03:49:30,918 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7964
+ 2025-09-26 03:49:36,209 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0214 | Val mean-roc_auc_score: 0.7869
+ 2025-09-26 03:49:39,184 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0393 | Val mean-roc_auc_score: 0.7942
+ 2025-09-26 03:49:44,325 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0307 | Val mean-roc_auc_score: 0.8052
+ 2025-09-26 03:49:50,082 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8097
+ 2025-09-26 03:49:56,438 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0085 | Val mean-roc_auc_score: 0.8116
+ 2025-09-26 03:50:01,751 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8128
+ 2025-09-26 03:50:07,363 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8178
+ 2025-09-26 03:50:10,115 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.8186
+ 2025-09-26 03:50:15,871 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8171
+ 2025-09-26 03:50:21,825 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.8160
+ 2025-09-26 03:50:27,146 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.8151
+ 2025-09-26 03:50:32,573 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.8125
+ 2025-09-26 03:50:37,952 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.8101
+ 2025-09-26 03:50:41,067 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.8107
+ 2025-09-26 03:50:47,062 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.8104
+ 2025-09-26 03:50:52,279 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0012 | Val mean-roc_auc_score: 0.8103
+ 2025-09-26 03:50:57,753 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8083
+ 2025-09-26 03:51:03,580 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.8074
+ 2025-09-26 03:51:08,938 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8073
+ 2025-09-26 03:51:12,243 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0032 | Val mean-roc_auc_score: 0.8015
+ 2025-09-26 03:51:17,570 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8009
+ 2025-09-26 03:51:23,508 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8023
+ 2025-09-26 03:51:29,068 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0021 | Val mean-roc_auc_score: 0.8018
+ 2025-09-26 03:51:34,690 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0013 | Val mean-roc_auc_score: 0.8023
+ 2025-09-26 03:51:39,441 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0021 | Val mean-roc_auc_score: 0.8046
+ 2025-09-26 03:51:44,832 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8042
+ 2025-09-26 03:51:50,732 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0601 | Val mean-roc_auc_score: 0.7986
+ 2025-09-26 03:51:56,162 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.8014
+ 2025-09-26 03:52:01,425 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8032
+ 2025-09-26 03:52:07,230 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0128 | Val mean-roc_auc_score: 0.8013
+ 2025-09-26 03:52:10,322 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.7977
+ 2025-09-26 03:52:15,818 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0076 | Val mean-roc_auc_score: 0.7927
+ 2025-09-26 03:52:21,248 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.7875
+ 2025-09-26 03:52:26,960 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.7847
+ 2025-09-26 03:52:32,919 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.7823
+ 2025-09-26 03:52:38,456 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.7828
+ 2025-09-26 03:52:41,813 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0075 | Val mean-roc_auc_score: 0.7779
+ 2025-09-26 03:52:47,561 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0017 | Val mean-roc_auc_score: 0.7756
+ 2025-09-26 03:52:53,744 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.7739
+ 2025-09-26 03:53:00,082 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.7752
+ 2025-09-26 03:53:05,718 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.7748
+ 2025-09-26 03:53:09,014 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.7752
+ 2025-09-26 03:53:14,654 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.7763
+ 2025-09-26 03:53:15,166 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8269
+ 2025-09-26 03:53:15,585 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8288, Std Dev: 0.0143
logs_modchembert_classification_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_covid19_epochs100_batch_size32_20250925_210847.log ADDED
@@ -0,0 +1,347 @@
+ 2025-09-25 21:08:47,057 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Running benchmark for dataset: covid19
+ 2025-09-25 21:08:47,057 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - dataset: covid19, tasks: ['label'], epochs: 100, learning rate: 3e-05
+ 2025-09-25 21:08:47,062 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset covid19 at 2025-09-25_21-08-47
+ 2025-09-25 21:08:56,284 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5962 | Val mean-roc_auc_score: 0.7684
+ 2025-09-25 21:08:56,285 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
+ 2025-09-25 21:08:57,411 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7684
+ 2025-09-25 21:09:08,134 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4479 | Val mean-roc_auc_score: 0.8187
+ 2025-09-25 21:09:08,338 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
+ 2025-09-25 21:09:09,039 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8187
+ 2025-09-25 21:09:16,478 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3904 | Val mean-roc_auc_score: 0.7876
+ 2025-09-25 21:09:27,427 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3375 | Val mean-roc_auc_score: 0.8201
+ 2025-09-25 21:09:27,629 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 260
+ 2025-09-25 21:09:28,284 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8201
+ 2025-09-25 21:09:39,003 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3075 | Val mean-roc_auc_score: 0.8166
+ 2025-09-25 21:09:46,794 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2538 | Val mean-roc_auc_score: 0.8180
+ 2025-09-25 21:09:57,730 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1864 | Val mean-roc_auc_score: 0.7976
+ 2025-09-25 21:10:08,235 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1477 | Val mean-roc_auc_score: 0.7942
+ 2025-09-25 21:10:16,595 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1250 | Val mean-roc_auc_score: 0.8211
+ 2025-09-25 21:10:16,770 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 585
+ 2025-09-25 21:10:17,441 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.8211
+ 2025-09-25 21:10:28,183 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0931 | Val mean-roc_auc_score: 0.8195
+ 2025-09-25 21:10:36,106 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1354 | Val mean-roc_auc_score: 0.8275
+ 2025-09-25 21:10:36,776 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 715
+ 2025-09-25 21:10:37,539 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val mean-roc_auc_score: 0.8275
+ 2025-09-25 21:10:48,940 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0769 | Val mean-roc_auc_score: 0.8247
+ 2025-09-25 21:11:00,263 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0601 | Val mean-roc_auc_score: 0.8195
+ 2025-09-25 21:11:08,511 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0430 | Val mean-roc_auc_score: 0.8292
+ 2025-09-25 21:11:08,719 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 910
+ 2025-09-25 21:11:09,336 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 14 with val mean-roc_auc_score: 0.8292
+ 2025-09-25 21:11:20,001 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0430 | Val mean-roc_auc_score: 0.8232
+ 2025-09-25 21:11:32,041 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0369 | Val mean-roc_auc_score: 0.8227
+ 2025-09-25 21:11:40,576 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.8243
+ 2025-09-25 21:11:51,600 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0375 | Val mean-roc_auc_score: 0.8201
+ 2025-09-25 21:12:00,528 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0435 | Val mean-roc_auc_score: 0.8217
+ 2025-09-25 21:12:11,775 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0292 | Val mean-roc_auc_score: 0.8233
+ 2025-09-25 21:12:22,599 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0266 | Val mean-roc_auc_score: 0.8191
+ 2025-09-25 21:12:31,938 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0435 | Val mean-roc_auc_score: 0.8297
+ 2025-09-25 21:12:32,122 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 1430
+ 2025-09-25 21:12:32,795 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 22 with val mean-roc_auc_score: 0.8297
+ 2025-09-25 21:12:44,389 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0291 | Val mean-roc_auc_score: 0.8280
+ 2025-09-25 21:12:54,920 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0333 | Val mean-roc_auc_score: 0.8241
+ 2025-09-25 21:13:06,165 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0312 | Val mean-roc_auc_score: 0.8206
+ 2025-09-25 21:13:13,820 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0237 | Val mean-roc_auc_score: 0.8228
+ 2025-09-25 21:13:23,839 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0229 | Val mean-roc_auc_score: 0.8194
+ 2025-09-25 21:13:34,747 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0307 | Val mean-roc_auc_score: 0.8241
+ 2025-09-25 21:13:45,450 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0234 | Val mean-roc_auc_score: 0.8294
+ 2025-09-25 21:13:53,335 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0261 | Val mean-roc_auc_score: 0.8297
+ 2025-09-25 21:14:05,638 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.8321
+ 2025-09-25 21:14:06,275 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 2015
+ 2025-09-25 21:14:07,110 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 31 with val mean-roc_auc_score: 0.8321
+ 2025-09-25 21:14:15,589 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0251 | Val mean-roc_auc_score: 0.8203
+ 2025-09-25 21:14:26,857 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0396 | Val mean-roc_auc_score: 0.8318
+ 2025-09-25 21:14:37,913 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0281 | Val mean-roc_auc_score: 0.8169
+ 2025-09-25 21:14:46,364 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0269 | Val mean-roc_auc_score: 0.8260
+ 2025-09-25 21:14:57,433 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0271 | Val mean-roc_auc_score: 0.8298
+ 2025-09-25 21:15:08,782 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.8261
+ 2025-09-25 21:15:16,854 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.8303
+ 2025-09-25 21:15:27,885 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0252 | Val mean-roc_auc_score: 0.8286
+ 2025-09-25 21:15:36,013 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8284
+ 2025-09-25 21:15:47,457 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0192 | Val mean-roc_auc_score: 0.8336
+ 2025-09-25 21:15:48,185 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 2665
+ 2025-09-25 21:15:48,862 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 41 with val mean-roc_auc_score: 0.8336
+ 2025-09-25 21:15:59,431 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0247 | Val mean-roc_auc_score: 0.8282
+ 2025-09-25 21:16:07,376 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.8303
+ 2025-09-25 21:16:18,279 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.8307
+ 2025-09-25 21:16:29,000 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.8310
+ 2025-09-25 21:16:37,142 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8335
+ 2025-09-25 21:16:49,844 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0213 | Val mean-roc_auc_score: 0.8291
+ 2025-09-25 21:16:58,013 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0193 | Val mean-roc_auc_score: 0.8317
+ 2025-09-25 21:17:09,055 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0185 | Val mean-roc_auc_score: 0.8296
+ 2025-09-25 21:17:19,436 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.8303
+ 2025-09-25 21:17:27,693 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8311
+ 2025-09-25 21:17:38,914 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.8349
+ 2025-09-25 21:17:39,261 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 3380
+ 2025-09-25 21:17:39,923 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 52 with val mean-roc_auc_score: 0.8349
+ 2025-09-25 21:17:50,605 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0188 | Val mean-roc_auc_score: 0.8274
+ 2025-09-25 21:17:59,035 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.8288
+ 2025-09-25 21:18:10,076 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8325
+ 2025-09-25 21:18:20,985 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.8327
+ 2025-09-25 21:18:30,083 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8319
+ 2025-09-25 21:18:40,657 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8297
+ 2025-09-25 21:18:49,078 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0164 | Val mean-roc_auc_score: 0.8319
+ 2025-09-25 21:18:59,842 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.8296
+ 2025-09-25 21:19:11,220 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8321
+ 2025-09-25 21:19:21,164 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8318
+ 2025-09-25 21:19:31,680 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8308
+ 2025-09-25 21:19:42,562 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.8294
+ 2025-09-25 21:19:50,284 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0102 | Val mean-roc_auc_score: 0.8294
+ 2025-09-25 21:20:01,804 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8289
+ 2025-09-25 21:20:11,009 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0182 | Val mean-roc_auc_score: 0.8295
91
+ 2025-09-25 21:20:22,326 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0170 | Val mean-roc_auc_score: 0.8291
92
+ 2025-09-25 21:20:33,375 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.8283
93
+ 2025-09-25 21:20:42,723 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0164 | Val mean-roc_auc_score: 0.8298
94
+ 2025-09-25 21:20:55,340 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0154 | Val mean-roc_auc_score: 0.8345
95
+ 2025-09-25 21:21:06,125 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.8327
96
+ 2025-09-25 21:21:17,712 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8321
97
+ 2025-09-25 21:21:24,820 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0128 | Val mean-roc_auc_score: 0.8297
98
+ 2025-09-25 21:21:35,412 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0161 | Val mean-roc_auc_score: 0.8292
99
+ 2025-09-25 21:21:46,428 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0270 | Val mean-roc_auc_score: 0.8359
100
+ 2025-09-25 21:21:47,108 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 4940
101
+ 2025-09-25 21:21:47,842 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 76 with val mean-roc_auc_score: 0.8359
102
+ 2025-09-25 21:22:00,211 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.8272
103
+ 2025-09-25 21:22:10,084 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0251 | Val mean-roc_auc_score: 0.8271
104
+ 2025-09-25 21:22:17,484 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0223 | Val mean-roc_auc_score: 0.8258
105
+ 2025-09-25 21:22:26,279 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8257
106
+ 2025-09-25 21:22:37,307 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0181 | Val mean-roc_auc_score: 0.8280
107
+ 2025-09-25 21:22:48,452 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.8281
108
+ 2025-09-25 21:22:56,381 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8264
109
+ 2025-09-25 21:23:06,861 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8280
110
+ 2025-09-25 21:23:17,415 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.8288
111
+ 2025-09-25 21:23:27,345 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.8295
+ 2025-09-25 21:23:35,121 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0158 | Val mean-roc_auc_score: 0.8298
+ 2025-09-25 21:23:45,081 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8310
+ 2025-09-25 21:23:54,520 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8301
+ 2025-09-25 21:24:04,967 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8310
+ 2025-09-25 21:24:15,954 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.8306
+ 2025-09-25 21:24:26,202 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8297
+ 2025-09-25 21:24:34,429 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8308
+ 2025-09-25 21:24:43,648 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8303
+ 2025-09-25 21:24:54,221 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8311
+ 2025-09-25 21:25:05,097 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.8309
+ 2025-09-25 21:25:13,582 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0219 | Val mean-roc_auc_score: 0.8315
+ 2025-09-25 21:25:25,041 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.8309
+ 2025-09-25 21:25:36,009 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0142 | Val mean-roc_auc_score: 0.8304
+ 2025-09-25 21:25:44,832 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0157 | Val mean-roc_auc_score: 0.8305
+ 2025-09-25 21:25:45,833 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8022
+ 2025-09-25 21:25:46,313 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset covid19 at 2025-09-25_21-25-46
+ 2025-09-25 21:25:55,624 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5269 | Val mean-roc_auc_score: 0.8063
+ 2025-09-25 21:25:55,625 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
+ 2025-09-25 21:25:56,622 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8063
+ 2025-09-25 21:26:07,261 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4000 | Val mean-roc_auc_score: 0.8257
+ 2025-09-25 21:26:07,477 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
+ 2025-09-25 21:26:08,157 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8257
+ 2025-09-25 21:26:18,507 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3712 | Val mean-roc_auc_score: 0.8250
+ 2025-09-25 21:26:26,299 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3312 | Val mean-roc_auc_score: 0.8107
+ 2025-09-25 21:26:36,442 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3000 | Val mean-roc_auc_score: 0.7988
+ 2025-09-25 21:26:47,973 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2712 | Val mean-roc_auc_score: 0.8178
+ 2025-09-25 21:26:59,585 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2114 | Val mean-roc_auc_score: 0.8156
+ 2025-09-25 21:27:07,714 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2109 | Val mean-roc_auc_score: 0.8135
+ 2025-09-25 21:27:18,222 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1337 | Val mean-roc_auc_score: 0.8156
+ 2025-09-25 21:27:26,377 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1163 | Val mean-roc_auc_score: 0.7983
+ 2025-09-25 21:27:37,627 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1385 | Val mean-roc_auc_score: 0.7896
+ 2025-09-25 21:27:49,195 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0817 | Val mean-roc_auc_score: 0.7933
+ 2025-09-25 21:27:59,604 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0628 | Val mean-roc_auc_score: 0.7898
+ 2025-09-25 21:28:07,223 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0855 | Val mean-roc_auc_score: 0.7846
+ 2025-09-25 21:28:18,156 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0466 | Val mean-roc_auc_score: 0.7912
+ 2025-09-25 21:28:29,457 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0437 | Val mean-roc_auc_score: 0.8054
+ 2025-09-25 21:28:41,155 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0770 | Val mean-roc_auc_score: 0.8137
+ 2025-09-25 21:28:49,129 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0387 | Val mean-roc_auc_score: 0.7952
+ 2025-09-25 21:29:00,060 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0420 | Val mean-roc_auc_score: 0.7963
+ 2025-09-25 21:29:11,402 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0327 | Val mean-roc_auc_score: 0.7918
+ 2025-09-25 21:29:19,643 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0373 | Val mean-roc_auc_score: 0.7971
+ 2025-09-25 21:29:31,024 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0305 | Val mean-roc_auc_score: 0.7892
+ 2025-09-25 21:29:41,306 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0263 | Val mean-roc_auc_score: 0.8003
+ 2025-09-25 21:29:49,698 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0258 | Val mean-roc_auc_score: 0.8092
+ 2025-09-25 21:30:01,093 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0419 | Val mean-roc_auc_score: 0.7817
+ 2025-09-25 21:30:11,566 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0279 | Val mean-roc_auc_score: 0.7947
+ 2025-09-25 21:30:22,456 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0213 | Val mean-roc_auc_score: 0.7983
+ 2025-09-25 21:30:28,956 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0367 | Val mean-roc_auc_score: 0.8052
+ 2025-09-25 21:30:39,371 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0248 | Val mean-roc_auc_score: 0.7988
+ 2025-09-25 21:30:51,423 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0334 | Val mean-roc_auc_score: 0.7812
+ 2025-09-25 21:31:03,388 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.8032
+ 2025-09-25 21:31:12,169 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0266 | Val mean-roc_auc_score: 0.7971
+ 2025-09-25 21:31:22,955 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0259 | Val mean-roc_auc_score: 0.8035
+ 2025-09-25 21:31:31,219 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0229 | Val mean-roc_auc_score: 0.8015
+ 2025-09-25 21:31:42,096 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.8070
+ 2025-09-25 21:31:52,872 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0192 | Val mean-roc_auc_score: 0.8022
+ 2025-09-25 21:32:01,915 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0192 | Val mean-roc_auc_score: 0.8047
+ 2025-09-25 21:32:12,843 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.8057
+ 2025-09-25 21:32:23,707 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0223 | Val mean-roc_auc_score: 0.8037
+ 2025-09-25 21:32:31,894 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0185 | Val mean-roc_auc_score: 0.8041
+ 2025-09-25 21:32:42,928 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8081
+ 2025-09-25 21:32:53,984 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0264 | Val mean-roc_auc_score: 0.7870
+ 2025-09-25 21:33:02,013 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0224 | Val mean-roc_auc_score: 0.7945
+ 2025-09-25 21:33:11,913 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.7982
+ 2025-09-25 21:33:22,880 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.7979
+ 2025-09-25 21:33:34,015 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.7973
+ 2025-09-25 21:33:42,804 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8003
+ 2025-09-25 21:33:52,278 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0183 | Val mean-roc_auc_score: 0.8006
+ 2025-09-25 21:34:03,010 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.7980
+ 2025-09-25 21:34:13,674 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.7980
+ 2025-09-25 21:34:22,514 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.8001
+ 2025-09-25 21:34:34,841 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.8007
+ 2025-09-25 21:34:43,403 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.7994
+ 2025-09-25 21:34:54,936 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.8009
+ 2025-09-25 21:35:05,916 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.7999
+ 2025-09-25 21:35:13,954 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.7991
+ 2025-09-25 21:35:25,576 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0322 | Val mean-roc_auc_score: 0.8012
+ 2025-09-25 21:35:36,271 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0250 | Val mean-roc_auc_score: 0.8028
+ 2025-09-25 21:35:44,615 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0129 | Val mean-roc_auc_score: 0.8020
+ 2025-09-25 21:35:55,198 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8018
+ 2025-09-25 21:36:06,454 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8028
+ 2025-09-25 21:36:18,969 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8021
+ 2025-09-25 21:36:25,929 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.8021
+ 2025-09-25 21:36:35,922 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0176 | Val mean-roc_auc_score: 0.8026
+ 2025-09-25 21:36:46,619 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0238 | Val mean-roc_auc_score: 0.8006
+ 2025-09-25 21:36:57,134 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0154 | Val mean-roc_auc_score: 0.8004
+ 2025-09-25 21:37:05,651 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.8001
+ 2025-09-25 21:37:15,927 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0228 | Val mean-roc_auc_score: 0.8018
+ 2025-09-25 21:37:26,747 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8011
+ 2025-09-25 21:37:34,598 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.8032
+ 2025-09-25 21:37:45,984 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.8039
+ 2025-09-25 21:37:54,802 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0145 | Val mean-roc_auc_score: 0.8027
+ 2025-09-25 21:38:06,018 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8028
+ 2025-09-25 21:38:16,342 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.8020
+ 2025-09-25 21:38:26,109 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8010
+ 2025-09-25 21:38:37,003 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0176 | Val mean-roc_auc_score: 0.8025
+ 2025-09-25 21:38:45,758 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8021
+ 2025-09-25 21:38:56,133 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.8026
+ 2025-09-25 21:39:08,465 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0108 | Val mean-roc_auc_score: 0.8025
+ 2025-09-25 21:39:17,632 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0149 | Val mean-roc_auc_score: 0.8029
+ 2025-09-25 21:39:30,308 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0144 | Val mean-roc_auc_score: 0.8032
+ 2025-09-25 21:39:43,037 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0112 | Val mean-roc_auc_score: 0.8025
+ 2025-09-25 21:39:52,660 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0157 | Val mean-roc_auc_score: 0.8018
+ 2025-09-25 21:40:04,913 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8020
+ 2025-09-25 21:40:14,742 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0230 | Val mean-roc_auc_score: 0.8006
+ 2025-09-25 21:40:27,042 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0154 | Val mean-roc_auc_score: 0.7996
+ 2025-09-25 21:40:37,811 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.7996
+ 2025-09-25 21:40:50,406 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.8010
+ 2025-09-25 21:41:01,709 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.8023
+ 2025-09-25 21:41:11,299 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0164 | Val mean-roc_auc_score: 0.8028
+ 2025-09-25 21:41:24,087 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.8032
+ 2025-09-25 21:41:35,657 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8004
+ 2025-09-25 21:41:46,562 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8025
+ 2025-09-25 21:41:59,596 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0157 | Val mean-roc_auc_score: 0.8042
+ 2025-09-25 21:42:11,002 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.8032
+ 2025-09-25 21:42:22,929 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8026
+ 2025-09-25 21:42:31,976 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.8026
+ 2025-09-25 21:42:42,344 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0147 | Val mean-roc_auc_score: 0.8035
+ 2025-09-25 21:42:53,400 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.8007
+ 2025-09-25 21:43:00,800 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.7996
+ 2025-09-25 21:43:01,743 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8227
+ 2025-09-25 21:43:02,221 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset covid19 at 2025-09-25_21-43-02
+ 2025-09-25 21:43:11,392 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5769 | Val mean-roc_auc_score: 0.7786
+ 2025-09-25 21:43:11,392 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
+ 2025-09-25 21:43:12,607 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7786
+ 2025-09-25 21:43:21,319 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4542 | Val mean-roc_auc_score: 0.7901
+ 2025-09-25 21:43:21,520 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
+ 2025-09-25 21:43:22,210 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.7901
+ 2025-09-25 21:43:32,784 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3885 | Val mean-roc_auc_score: 0.7975
+ 2025-09-25 21:43:33,072 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 195
+ 2025-09-25 21:43:33,763 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.7975
+ 2025-09-25 21:43:44,117 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3417 | Val mean-roc_auc_score: 0.8061
+ 2025-09-25 21:43:44,333 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 260
+ 2025-09-25 21:43:45,058 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8061
+ 2025-09-25 21:43:53,304 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3475 | Val mean-roc_auc_score: 0.8134
+ 2025-09-25 21:43:53,536 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 325
+ 2025-09-25 21:43:54,210 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8134
+ 2025-09-25 21:44:05,048 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2394 | Val mean-roc_auc_score: 0.7834
+ 2025-09-25 21:44:15,789 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1966 | Val mean-roc_auc_score: 0.8204
+ 2025-09-25 21:44:17,232 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 455
+ 2025-09-25 21:44:17,449 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8204
+ 2025-09-25 21:44:27,784 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1641 | Val mean-roc_auc_score: 0.8137
+ 2025-09-25 21:44:39,545 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1058 | Val mean-roc_auc_score: 0.8010
+ 2025-09-25 21:44:48,639 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0981 | Val mean-roc_auc_score: 0.8038
+ 2025-09-25 21:44:59,964 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0760 | Val mean-roc_auc_score: 0.8049
+ 2025-09-25 21:45:11,189 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0620 | Val mean-roc_auc_score: 0.7920
+ 2025-09-25 21:45:19,056 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0750 | Val mean-roc_auc_score: 0.8003
+ 2025-09-25 21:45:29,441 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0578 | Val mean-roc_auc_score: 0.7911
+ 2025-09-25 21:45:37,364 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0452 | Val mean-roc_auc_score: 0.8053
+ 2025-09-25 21:45:49,242 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0398 | Val mean-roc_auc_score: 0.8019
+ 2025-09-25 21:46:00,838 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0410 | Val mean-roc_auc_score: 0.7986
+ 2025-09-25 21:46:09,183 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0447 | Val mean-roc_auc_score: 0.8032
+ 2025-09-25 21:46:20,489 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0393 | Val mean-roc_auc_score: 0.7959
+ 2025-09-25 21:46:31,716 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0440 | Val mean-roc_auc_score: 0.8067
+ 2025-09-25 21:46:40,235 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0406 | Val mean-roc_auc_score: 0.8079
+ 2025-09-25 21:46:51,314 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0275 | Val mean-roc_auc_score: 0.8023
+ 2025-09-25 21:46:59,547 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0230 | Val mean-roc_auc_score: 0.8008
+ 2025-09-25 21:47:11,119 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0271 | Val mean-roc_auc_score: 0.8027
+ 2025-09-25 21:47:22,250 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0284 | Val mean-roc_auc_score: 0.8001
+ 2025-09-25 21:47:30,704 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0268 | Val mean-roc_auc_score: 0.8039
+ 2025-09-25 21:47:41,866 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0295 | Val mean-roc_auc_score: 0.8127
+ 2025-09-25 21:47:52,467 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0198 | Val mean-roc_auc_score: 0.8123
+ 2025-09-25 21:48:00,658 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0257 | Val mean-roc_auc_score: 0.8008
+ 2025-09-25 21:48:11,758 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0230 | Val mean-roc_auc_score: 0.8073
+ 2025-09-25 21:48:20,900 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0109 | Val mean-roc_auc_score: 0.8058
+ 2025-09-25 21:48:32,822 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0206 | Val mean-roc_auc_score: 0.8072
+ 2025-09-25 21:48:45,361 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0210 | Val mean-roc_auc_score: 0.8066
+ 2025-09-25 21:48:54,940 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0185 | Val mean-roc_auc_score: 0.8090
+ 2025-09-25 21:49:07,330 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0213 | Val mean-roc_auc_score: 0.8053
+ 2025-09-25 21:49:16,880 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0225 | Val mean-roc_auc_score: 0.8063
+ 2025-09-25 21:49:28,015 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0443 | Val mean-roc_auc_score: 0.8049
+ 2025-09-25 21:49:38,455 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8083
+ 2025-09-25 21:49:46,288 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0235 | Val mean-roc_auc_score: 0.8071
+ 2025-09-25 21:49:57,130 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8066
+ 2025-09-25 21:50:08,165 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.8096
+ 2025-09-25 21:50:16,690 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0166 | Val mean-roc_auc_score: 0.8154
+ 2025-09-25 21:50:27,051 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.8113
+ 2025-09-25 21:50:37,856 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0193 | Val mean-roc_auc_score: 0.8096
+ 2025-09-25 21:50:48,892 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0192 | Val mean-roc_auc_score: 0.8154
+ 2025-09-25 21:50:56,281 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8140
+ 2025-09-25 21:51:08,295 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.8135
+ 2025-09-25 21:51:19,491 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.8099
+ 2025-09-25 21:51:30,448 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0215 | Val mean-roc_auc_score: 0.8117
+ 2025-09-25 21:51:38,849 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0141 | Val mean-roc_auc_score: 0.8066
+ 2025-09-25 21:51:50,239 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0344 | Val mean-roc_auc_score: 0.8097
+ 2025-09-25 21:51:59,386 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0185 | Val mean-roc_auc_score: 0.8082
+ 2025-09-25 21:52:12,059 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0199 | Val mean-roc_auc_score: 0.8077
+ 2025-09-25 21:52:24,111 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0250 | Val mean-roc_auc_score: 0.8098
+ 2025-09-25 21:52:32,275 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8115
+ 2025-09-25 21:52:43,361 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0181 | Val mean-roc_auc_score: 0.8085
+ 2025-09-25 21:52:54,507 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0137 | Val mean-roc_auc_score: 0.8101
+ 2025-09-25 21:53:04,085 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0310 | Val mean-roc_auc_score: 0.8029
+ 2025-09-25 21:53:15,584 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.1009 | Val mean-roc_auc_score: 0.8005
+ 2025-09-25 21:53:24,292 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0808 | Val mean-roc_auc_score: 0.8062
+ 2025-09-25 21:53:35,134 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0502 | Val mean-roc_auc_score: 0.7902
+ 2025-09-25 21:53:47,757 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0271 | Val mean-roc_auc_score: 0.7970
+ 2025-09-25 21:53:55,852 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0238 | Val mean-roc_auc_score: 0.7948
+ 2025-09-25 21:54:07,056 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0259 | Val mean-roc_auc_score: 0.7962
+ 2025-09-25 21:54:15,419 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.7976
+ 2025-09-25 21:54:26,478 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0206 | Val mean-roc_auc_score: 0.7903
+ 2025-09-25 21:54:38,111 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.7905
+ 2025-09-25 21:54:46,054 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0260 | Val mean-roc_auc_score: 0.7929
+ 2025-09-25 21:54:57,436 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.7928
+ 2025-09-25 21:55:08,296 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0233 | Val mean-roc_auc_score: 0.7929
+ 2025-09-25 21:55:16,608 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0319 | Val mean-roc_auc_score: 0.7943
+ 2025-09-25 21:55:28,097 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.7940
+ 2025-09-25 21:55:36,964 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0194 | Val mean-roc_auc_score: 0.7948
+ 2025-09-25 21:55:47,443 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.7945
320
+ 2025-09-25 21:55:58,224 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.7975
321
+ 2025-09-25 21:56:05,834 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.7973
322
+ 2025-09-25 21:56:18,765 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.7964
323
+ 2025-09-25 21:56:29,409 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.7972
324
+ 2025-09-25 21:56:37,426 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.7977
325
+ 2025-09-25 21:56:48,427 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0181 | Val mean-roc_auc_score: 0.7968
326
+ 2025-09-25 21:56:59,095 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0194 | Val mean-roc_auc_score: 0.7963
327
+ 2025-09-25 21:57:07,710 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0238 | Val mean-roc_auc_score: 0.7962
328
+ 2025-09-25 21:57:20,228 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.7974
329
+ 2025-09-25 21:57:29,906 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0193 | Val mean-roc_auc_score: 0.7968
330
+ 2025-09-25 21:57:40,646 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.7968
331
+ 2025-09-25 21:57:51,067 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.7963
332
+ 2025-09-25 21:57:59,437 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0193 | Val mean-roc_auc_score: 0.7953
333
+ 2025-09-25 21:58:10,727 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0231 | Val mean-roc_auc_score: 0.7972
334
+ 2025-09-25 21:58:21,187 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8035
335
+ 2025-09-25 21:58:29,123 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.8013
336
+ 2025-09-25 21:58:39,927 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8013
337
+ 2025-09-25 21:58:48,565 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0191 | Val mean-roc_auc_score: 0.8022
338
+ 2025-09-25 21:59:00,862 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8024
339
+ 2025-09-25 21:59:11,626 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0118 | Val mean-roc_auc_score: 0.8031
340
+ 2025-09-25 21:59:18,473 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8027
341
+ 2025-09-25 21:59:27,069 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0170 | Val mean-roc_auc_score: 0.8026
342
+ 2025-09-25 21:59:34,543 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.8017
343
+ 2025-09-25 21:59:40,997 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0183 | Val mean-roc_auc_score: 0.8011
344
+ 2025-09-25 21:59:44,837 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.8025
345
+ 2025-09-25 21:59:51,074 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8026
346
+ 2025-09-25 21:59:51,775 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.7838
347
+ 2025-09-25 21:59:52,281 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8029, Std Dev: 0.0159
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_microsom_stab_h_epochs100_batch_size32_20250926_053825.log ADDED
@@ -0,0 +1,351 @@
+ 2025-09-26 05:38:25,676 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_microsom_stab_h
+ 2025-09-26 05:38:25,676 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - dataset: adme_microsom_stab_h, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
+ 2025-09-26 05:38:25,689 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_microsom_stab_h at 2025-09-26_05-38-25
+ 2025-09-26 05:38:38,671 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0185 | Val rms_score: 0.4453
+ 2025-09-26 05:38:38,671 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
+ 2025-09-26 05:38:39,325 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4453
+ 2025-09-26 05:38:53,323 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6211 | Val rms_score: 0.4147
+ 2025-09-26 05:38:53,514 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
+ 2025-09-26 05:38:54,186 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4147
+ 2025-09-26 05:39:07,297 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6111 | Val rms_score: 0.4274
+ 2025-09-26 05:39:17,835 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5195 | Val rms_score: 0.3943
+ 2025-09-26 05:39:18,087 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 216
+ 2025-09-26 05:39:19,015 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.3943
+ 2025-09-26 05:39:31,651 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4514 | Val rms_score: 0.4183
+ 2025-09-26 05:39:41,536 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3411 | Val rms_score: 0.4263
+ 2025-09-26 05:39:54,114 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2789 | Val rms_score: 0.4123
+ 2025-09-26 05:40:06,411 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2285 | Val rms_score: 0.3980
+ 2025-09-26 05:40:16,901 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1794 | Val rms_score: 0.3980
+ 2025-09-26 05:40:28,823 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1508 | Val rms_score: 0.4106
+ 2025-09-26 05:40:39,369 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1250 | Val rms_score: 0.4105
+ 2025-09-26 05:40:51,945 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0977 | Val rms_score: 0.4183
+ 2025-09-26 05:41:03,554 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1113 | Val rms_score: 0.3922
+ 2025-09-26 05:41:03,720 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 702
+ 2025-09-26 05:41:04,364 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.3922
+ 2025-09-26 05:41:14,477 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0747 | Val rms_score: 0.4131
+ 2025-09-26 05:41:26,990 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0715 | Val rms_score: 0.3933
+ 2025-09-26 05:41:37,630 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0666 | Val rms_score: 0.4014
+ 2025-09-26 05:41:50,565 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0586 | Val rms_score: 0.3986
+ 2025-09-26 05:42:03,223 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0544 | Val rms_score: 0.3975
+ 2025-09-26 05:42:14,345 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0529 | Val rms_score: 0.3953
+ 2025-09-26 05:42:26,328 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0506 | Val rms_score: 0.3981
+ 2025-09-26 05:42:36,134 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0478 | Val rms_score: 0.4008
+ 2025-09-26 05:42:49,491 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0425 | Val rms_score: 0.3988
+ 2025-09-26 05:43:02,356 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0400 | Val rms_score: 0.4039
+ 2025-09-26 05:43:12,924 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0385 | Val rms_score: 0.3926
+ 2025-09-26 05:43:25,899 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0428 | Val rms_score: 0.3934
+ 2025-09-26 05:43:36,381 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0347 | Val rms_score: 0.3944
+ 2025-09-26 05:43:49,505 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0373 | Val rms_score: 0.3958
+ 2025-09-26 05:44:02,087 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0386 | Val rms_score: 0.3939
+ 2025-09-26 05:44:12,518 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0349 | Val rms_score: 0.3978
+ 2025-09-26 05:44:25,310 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0375 | Val rms_score: 0.3970
+ 2025-09-26 05:44:36,144 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0341 | Val rms_score: 0.3963
+ 2025-09-26 05:44:50,319 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0318 | Val rms_score: 0.3964
+ 2025-09-26 05:45:03,841 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0340 | Val rms_score: 0.3984
+ 2025-09-26 05:45:14,817 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0323 | Val rms_score: 0.4001
+ 2025-09-26 05:45:28,282 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0314 | Val rms_score: 0.3970
+ 2025-09-26 05:45:39,056 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0320 | Val rms_score: 0.3960
+ 2025-09-26 05:45:53,240 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0304 | Val rms_score: 0.4018
+ 2025-09-26 05:46:05,123 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0294 | Val rms_score: 0.4006
+ 2025-09-26 05:46:18,205 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0304 | Val rms_score: 0.4003
+ 2025-09-26 05:46:30,936 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0273 | Val rms_score: 0.3966
+ 2025-09-26 05:46:42,108 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0300 | Val rms_score: 0.4026
+ 2025-09-26 05:46:56,491 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0284 | Val rms_score: 0.4027
+ 2025-09-26 05:47:07,911 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0293 | Val rms_score: 0.3943
+ 2025-09-26 05:47:21,930 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0278 | Val rms_score: 0.3960
+ 2025-09-26 05:47:33,541 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0276 | Val rms_score: 0.3868
+ 2025-09-26 05:47:33,735 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2430
+ 2025-09-26 05:47:34,400 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 45 with val rms_score: 0.3868
+ 2025-09-26 05:47:47,737 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0253 | Val rms_score: 0.3976
+ 2025-09-26 05:48:01,607 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0267 | Val rms_score: 0.3895
+ 2025-09-26 05:48:12,284 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0262 | Val rms_score: 0.3994
+ 2025-09-26 05:48:26,956 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0255 | Val rms_score: 0.3971
+ 2025-09-26 05:48:37,081 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0229 | Val rms_score: 0.3918
+ 2025-09-26 05:48:48,615 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0240 | Val rms_score: 0.3952
+ 2025-09-26 05:49:00,980 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0198 | Val rms_score: 0.3970
+ 2025-09-26 05:49:11,041 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0237 | Val rms_score: 0.3967
+ 2025-09-26 05:49:23,277 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0239 | Val rms_score: 0.3995
+ 2025-09-26 05:49:33,173 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0247 | Val rms_score: 0.3946
+ 2025-09-26 05:49:46,778 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0247 | Val rms_score: 0.3988
+ 2025-09-26 05:49:59,502 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0233 | Val rms_score: 0.3884
+ 2025-09-26 05:50:09,412 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0245 | Val rms_score: 0.3923
+ 2025-09-26 05:50:22,153 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0234 | Val rms_score: 0.3926
+ 2025-09-26 05:50:32,462 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0249 | Val rms_score: 0.3921
+ 2025-09-26 05:50:44,580 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0226 | Val rms_score: 0.3949
+ 2025-09-26 05:50:58,189 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0228 | Val rms_score: 0.3918
+ 2025-09-26 05:51:08,391 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0204 | Val rms_score: 0.3930
+ 2025-09-26 05:51:21,249 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0243 | Val rms_score: 0.3909
+ 2025-09-26 05:51:31,650 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0201 | Val rms_score: 0.3927
+ 2025-09-26 05:51:44,009 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0224 | Val rms_score: 0.3899
+ 2025-09-26 05:51:57,361 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0214 | Val rms_score: 0.3954
+ 2025-09-26 05:52:07,594 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0214 | Val rms_score: 0.3945
+ 2025-09-26 05:52:19,304 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0230 | Val rms_score: 0.3906
+ 2025-09-26 05:52:31,720 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0205 | Val rms_score: 0.3939
+ 2025-09-26 05:52:41,447 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0211 | Val rms_score: 0.3902
+ 2025-09-26 05:52:53,908 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0217 | Val rms_score: 0.3904
+ 2025-09-26 05:53:03,748 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0206 | Val rms_score: 0.3909
+ 2025-09-26 05:53:15,993 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0205 | Val rms_score: 0.3950
+ 2025-09-26 05:53:30,044 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0205 | Val rms_score: 0.3908
+ 2025-09-26 05:53:39,847 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0198 | Val rms_score: 0.3914
+ 2025-09-26 05:53:52,428 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0188 | Val rms_score: 0.3919
+ 2025-09-26 05:54:01,996 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0201 | Val rms_score: 0.3887
+ 2025-09-26 05:54:14,502 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0198 | Val rms_score: 0.3938
+ 2025-09-26 05:54:26,954 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0211 | Val rms_score: 0.3886
+ 2025-09-26 05:54:36,423 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0192 | Val rms_score: 0.3913
+ 2025-09-26 05:54:49,552 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0201 | Val rms_score: 0.3876
+ 2025-09-26 05:54:59,376 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0198 | Val rms_score: 0.3934
+ 2025-09-26 05:55:11,856 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0183 | Val rms_score: 0.3917
+ 2025-09-26 05:55:24,450 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0192 | Val rms_score: 0.3911
+ 2025-09-26 05:55:33,840 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0186 | Val rms_score: 0.3900
+ 2025-09-26 05:55:46,230 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0187 | Val rms_score: 0.3923
+ 2025-09-26 05:55:58,257 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0180 | Val rms_score: 0.3952
+ 2025-09-26 05:56:07,765 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0175 | Val rms_score: 0.3897
+ 2025-09-26 05:56:20,002 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0182 | Val rms_score: 0.3898
+ 2025-09-26 05:56:29,547 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0193 | Val rms_score: 0.3953
+ 2025-09-26 05:56:42,798 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0187 | Val rms_score: 0.3919
+ 2025-09-26 05:56:56,518 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0212 | Val rms_score: 0.3912
+ 2025-09-26 05:57:06,560 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0195 | Val rms_score: 0.3919
+ 2025-09-26 05:57:18,468 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0181 | Val rms_score: 0.3914
+ 2025-09-26 05:57:27,836 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0198 | Val rms_score: 0.3924
+ 2025-09-26 05:57:40,290 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0183 | Val rms_score: 0.3895
+ 2025-09-26 05:57:51,972 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0182 | Val rms_score: 0.3894
+ 2025-09-26 05:58:02,128 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0179 | Val rms_score: 0.3914
+ 2025-09-26 05:58:13,288 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0187 | Val rms_score: 0.3886
+ 2025-09-26 05:58:14,453 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4197
+ 2025-09-26 05:58:14,798 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_microsom_stab_h at 2025-09-26_05-58-14
+ 2025-09-26 05:58:25,045 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9167 | Val rms_score: 0.4481
+ 2025-09-26 05:58:25,045 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
+ 2025-09-26 05:58:25,949 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4481
+ 2025-09-26 05:58:36,339 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.7734 | Val rms_score: 0.4455
+ 2025-09-26 05:58:36,547 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
+ 2025-09-26 05:58:37,215 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4455
+ 2025-09-26 05:58:50,031 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6296 | Val rms_score: 0.4246
+ 2025-09-26 05:58:50,253 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 162
+ 2025-09-26 05:58:50,882 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4246
+ 2025-09-26 05:59:01,471 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.5391 | Val rms_score: 0.4149
+ 2025-09-26 05:59:01,689 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 216
+ 2025-09-26 05:59:02,512 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4149
+ 2025-09-26 05:59:16,380 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4259 | Val rms_score: 0.4390
+ 2025-09-26 05:59:27,256 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3320 | Val rms_score: 0.4054
+ 2025-09-26 05:59:27,922 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 324
+ 2025-09-26 05:59:28,628 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.4054
+ 2025-09-26 05:59:41,875 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2604 | Val rms_score: 0.4045
+ 2025-09-26 05:59:42,095 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 378
+ 2025-09-26 05:59:42,748 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.4045
135
+ 2025-09-26 05:59:55,277 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2041 | Val rms_score: 0.3998
136
+ 2025-09-26 05:59:55,482 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 432
137
+ 2025-09-26 05:59:56,100 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.3998
138
+ 2025-09-26 06:00:05,957 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1748 | Val rms_score: 0.3945
139
+ 2025-09-26 06:00:06,164 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 486
140
+ 2025-09-26 06:00:06,794 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3945
141
+ 2025-09-26 06:00:18,846 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1289 | Val rms_score: 0.4162
142
+ 2025-09-26 06:00:28,904 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1094 | Val rms_score: 0.4017
143
+ 2025-09-26 06:00:41,613 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0951 | Val rms_score: 0.4096
144
+ 2025-09-26 06:00:53,100 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1328 | Val rms_score: 0.4141
145
+ 2025-09-26 06:01:03,263 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0787 | Val rms_score: 0.4052
146
+ 2025-09-26 06:01:15,485 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0590 | Val rms_score: 0.3999
147
+ 2025-09-26 06:01:27,709 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0518 | Val rms_score: 0.4060
148
+ 2025-09-26 06:01:38,178 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0540 | Val rms_score: 0.4125
149
+ 2025-09-26 06:01:50,445 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0506 | Val rms_score: 0.4090
150
+ 2025-09-26 06:02:01,969 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0463 | Val rms_score: 0.4000
151
+ 2025-09-26 06:02:13,918 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0448 | Val rms_score: 0.4074
152
+ 2025-09-26 06:02:25,954 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0455 | Val rms_score: 0.4001
153
+ 2025-09-26 06:02:36,218 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0451 | Val rms_score: 0.4065
154
+ 2025-09-26 06:02:48,548 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0439 | Val rms_score: 0.4009
155
+ 2025-09-26 06:02:58,722 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0420 | Val rms_score: 0.4056
156
+ 2025-09-26 06:03:09,877 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0397 | Val rms_score: 0.3928
157
+ 2025-09-26 06:03:10,051 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1350
158
+ 2025-09-26 06:03:10,703 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.3928
159
+ 2025-09-26 06:03:23,306 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0339 | Val rms_score: 0.4074
160
+ 2025-09-26 06:03:33,603 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0349 | Val rms_score: 0.4122
161
+ 2025-09-26 06:03:46,014 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0303 | Val rms_score: 0.3994
162
+ 2025-09-26 06:03:55,721 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0328 | Val rms_score: 0.3955
163
+ 2025-09-26 06:04:07,884 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0295 | Val rms_score: 0.4047
164
+ 2025-09-26 06:04:20,589 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0299 | Val rms_score: 0.3980
165
+ 2025-09-26 06:04:31,936 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0301 | Val rms_score: 0.3962
166
+ 2025-09-26 06:04:45,780 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0289 | Val rms_score: 0.4007
167
+ 2025-09-26 06:04:56,631 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0295 | Val rms_score: 0.3975
168
+ 2025-09-26 06:05:10,457 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0327 | Val rms_score: 0.3994
169
+ 2025-09-26 06:05:24,080 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0286 | Val rms_score: 0.3993
170
+ 2025-09-26 06:05:35,062 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0282 | Val rms_score: 0.4000
171
+ 2025-09-26 06:05:49,339 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0288 | Val rms_score: 0.3953
+ 2025-09-26 06:05:59,701 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0246 | Val rms_score: 0.4009
+ 2025-09-26 06:06:12,272 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0266 | Val rms_score: 0.3951
+ 2025-09-26 06:06:25,577 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0255 | Val rms_score: 0.3957
+ 2025-09-26 06:06:36,734 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0258 | Val rms_score: 0.3965
+ 2025-09-26 06:06:49,588 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0224 | Val rms_score: 0.3953
+ 2025-09-26 06:07:00,024 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0247 | Val rms_score: 0.3959
+ 2025-09-26 06:07:12,404 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0219 | Val rms_score: 0.3997
+ 2025-09-26 06:07:23,280 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0242 | Val rms_score: 0.4026
+ 2025-09-26 06:07:36,571 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0251 | Val rms_score: 0.3991
+ 2025-09-26 06:07:49,144 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0217 | Val rms_score: 0.4027
+ 2025-09-26 06:07:59,287 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0234 | Val rms_score: 0.4028
+ 2025-09-26 06:08:11,874 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0239 | Val rms_score: 0.3973
+ 2025-09-26 06:08:22,643 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0227 | Val rms_score: 0.4028
+ 2025-09-26 06:08:36,781 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0255 | Val rms_score: 0.3952
+ 2025-09-26 06:08:50,270 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0213 | Val rms_score: 0.3985
+ 2025-09-26 06:09:02,209 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0215 | Val rms_score: 0.3947
+ 2025-09-26 06:09:16,085 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0233 | Val rms_score: 0.3958
+ 2025-09-26 06:09:27,830 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0223 | Val rms_score: 0.3947
+ 2025-09-26 06:09:40,754 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0210 | Val rms_score: 0.4017
+ 2025-09-26 06:09:53,466 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0219 | Val rms_score: 0.3957
+ 2025-09-26 06:10:03,482 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0207 | Val rms_score: 0.3977
+ 2025-09-26 06:10:16,781 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0213 | Val rms_score: 0.3971
+ 2025-09-26 06:10:27,673 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0210 | Val rms_score: 0.3961
+ 2025-09-26 06:10:41,830 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0215 | Val rms_score: 0.3985
+ 2025-09-26 06:10:52,674 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0156 | Val rms_score: 0.3955
+ 2025-09-26 06:11:06,012 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0201 | Val rms_score: 0.3957
+ 2025-09-26 06:11:19,140 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0214 | Val rms_score: 0.3958
+ 2025-09-26 06:11:30,234 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0190 | Val rms_score: 0.3933
+ 2025-09-26 06:11:43,980 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0188 | Val rms_score: 0.3949
+ 2025-09-26 06:11:55,446 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0188 | Val rms_score: 0.3965
+ 2025-09-26 06:12:07,990 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0195 | Val rms_score: 0.3916
+ 2025-09-26 06:12:08,192 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3726
+ 2025-09-26 06:12:08,937 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 69 with val rms_score: 0.3916
+ 2025-09-26 06:12:22,786 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0182 | Val rms_score: 0.3941
+ 2025-09-26 06:12:33,110 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0198 | Val rms_score: 0.3968
+ 2025-09-26 06:12:47,063 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0198 | Val rms_score: 0.3947
+ 2025-09-26 06:12:58,260 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0193 | Val rms_score: 0.3944
+ 2025-09-26 06:13:11,746 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0187 | Val rms_score: 0.3940
+ 2025-09-26 06:13:23,559 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0189 | Val rms_score: 0.3937
+ 2025-09-26 06:13:37,476 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0176 | Val rms_score: 0.3951
+ 2025-09-26 06:13:51,121 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0188 | Val rms_score: 0.3962
+ 2025-09-26 06:14:01,558 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0191 | Val rms_score: 0.3936
+ 2025-09-26 06:14:14,560 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0178 | Val rms_score: 0.3937
+ 2025-09-26 06:14:24,796 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0193 | Val rms_score: 0.3945
+ 2025-09-26 06:14:37,663 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0174 | Val rms_score: 0.3951
+ 2025-09-26 06:14:50,413 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0179 | Val rms_score: 0.3934
+ 2025-09-26 06:15:00,541 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0176 | Val rms_score: 0.3939
+ 2025-09-26 06:15:13,117 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0192 | Val rms_score: 0.3956
+ 2025-09-26 06:15:23,532 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0174 | Val rms_score: 0.3921
+ 2025-09-26 06:15:36,970 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0181 | Val rms_score: 0.3913
+ 2025-09-26 06:15:37,440 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 4644
+ 2025-09-26 06:15:38,125 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 86 with val rms_score: 0.3913
+ 2025-09-26 06:15:49,984 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0187 | Val rms_score: 0.3948
+ 2025-09-26 06:16:00,237 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0187 | Val rms_score: 0.3906
+ 2025-09-26 06:16:00,422 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 4752
+ 2025-09-26 06:16:01,106 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 88 with val rms_score: 0.3906
+ 2025-09-26 06:16:13,583 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0165 | Val rms_score: 0.3911
+ 2025-09-26 06:16:23,590 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0174 | Val rms_score: 0.3966
+ 2025-09-26 06:16:36,305 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0163 | Val rms_score: 0.3947
+ 2025-09-26 06:16:50,338 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0171 | Val rms_score: 0.3958
+ 2025-09-26 06:17:02,320 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0177 | Val rms_score: 0.3921
+ 2025-09-26 06:17:15,499 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0182 | Val rms_score: 0.3932
+ 2025-09-26 06:17:27,953 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0160 | Val rms_score: 0.3932
+ 2025-09-26 06:17:41,044 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0167 | Val rms_score: 0.3949
+ 2025-09-26 06:17:52,742 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0168 | Val rms_score: 0.3926
+ 2025-09-26 06:18:05,999 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0179 | Val rms_score: 0.3927
+ 2025-09-26 06:18:19,407 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0175 | Val rms_score: 0.3955
+ 2025-09-26 06:18:29,537 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0164 | Val rms_score: 0.3965
+ 2025-09-26 06:18:30,429 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4278
+ 2025-09-26 06:18:30,907 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_microsom_stab_h at 2025-09-26_06-18-30
+ 2025-09-26 06:18:43,045 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0370 | Val rms_score: 0.4828
+ 2025-09-26 06:18:43,045 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
+ 2025-09-26 06:18:43,974 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4828
+ 2025-09-26 06:18:54,648 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6641 | Val rms_score: 0.4426
+ 2025-09-26 06:18:54,843 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
+ 2025-09-26 06:18:55,648 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4426
+ 2025-09-26 06:19:09,924 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.6250 | Val rms_score: 0.4185
+ 2025-09-26 06:19:10,142 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 162
+ 2025-09-26 06:19:10,895 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4185
+ 2025-09-26 06:19:22,277 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4941 | Val rms_score: 0.4436
+ 2025-09-26 06:19:35,644 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4375 | Val rms_score: 0.4029
+ 2025-09-26 06:19:35,867 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 270
+ 2025-09-26 06:19:36,503 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4029
+ 2025-09-26 06:19:47,024 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3333 | Val rms_score: 0.4173
+ 2025-09-26 06:20:00,859 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2986 | Val rms_score: 0.4116
+ 2025-09-26 06:20:14,677 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2207 | Val rms_score: 0.4125
+ 2025-09-26 06:20:26,371 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1968 | Val rms_score: 0.4263
+ 2025-09-26 06:20:39,942 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1430 | Val rms_score: 0.4186
+ 2025-09-26 06:20:50,980 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1175 | Val rms_score: 0.4197
+ 2025-09-26 06:21:05,176 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1055 | Val rms_score: 0.4165
+ 2025-09-26 06:21:18,011 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0981 | Val rms_score: 0.4414
+ 2025-09-26 06:21:28,525 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0810 | Val rms_score: 0.4240
+ 2025-09-26 06:21:41,564 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0785 | Val rms_score: 0.4135
+ 2025-09-26 06:21:52,348 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0637 | Val rms_score: 0.4130
+ 2025-09-26 06:22:05,970 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0673 | Val rms_score: 0.4112
+ 2025-09-26 06:22:16,501 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0570 | Val rms_score: 0.4094
+ 2025-09-26 06:22:30,875 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0565 | Val rms_score: 0.4183
+ 2025-09-26 06:22:44,691 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0547 | Val rms_score: 0.4248
+ 2025-09-26 06:22:55,663 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0597 | Val rms_score: 0.4166
+ 2025-09-26 06:23:09,934 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0460 | Val rms_score: 0.4232
+ 2025-09-26 06:23:21,698 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0441 | Val rms_score: 0.4147
+ 2025-09-26 06:23:35,487 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0417 | Val rms_score: 0.4180
+ 2025-09-26 06:23:47,078 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0383 | Val rms_score: 0.4191
+ 2025-09-26 06:24:00,843 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0356 | Val rms_score: 0.4203
+ 2025-09-26 06:24:15,231 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0394 | Val rms_score: 0.4182
+ 2025-09-26 06:24:27,102 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0391 | Val rms_score: 0.4232
+ 2025-09-26 06:24:41,182 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0382 | Val rms_score: 0.4144
+ 2025-09-26 06:24:52,830 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0346 | Val rms_score: 0.4164
+ 2025-09-26 06:25:06,339 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0347 | Val rms_score: 0.4120
+ 2025-09-26 06:25:17,054 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0336 | Val rms_score: 0.4131
+ 2025-09-26 06:25:29,881 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0326 | Val rms_score: 0.4150
+ 2025-09-26 06:25:42,572 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0306 | Val rms_score: 0.4196
+ 2025-09-26 06:25:52,954 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0327 | Val rms_score: 0.4130
+ 2025-09-26 06:26:06,039 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0320 | Val rms_score: 0.4133
+ 2025-09-26 06:26:16,298 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0314 | Val rms_score: 0.4128
+ 2025-09-26 06:26:31,524 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0314 | Val rms_score: 0.4106
+ 2025-09-26 06:26:45,337 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0299 | Val rms_score: 0.4109
+ 2025-09-26 06:26:56,604 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0302 | Val rms_score: 0.4116
+ 2025-09-26 06:27:09,278 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0322 | Val rms_score: 0.4071
+ 2025-09-26 06:27:20,387 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0292 | Val rms_score: 0.4089
+ 2025-09-26 06:27:33,578 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0266 | Val rms_score: 0.4070
+ 2025-09-26 06:27:44,309 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0284 | Val rms_score: 0.4061
+ 2025-09-26 06:27:56,626 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0273 | Val rms_score: 0.4099
+ 2025-09-26 06:28:08,558 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0272 | Val rms_score: 0.4067
+ 2025-09-26 06:28:18,616 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0246 | Val rms_score: 0.4095
+ 2025-09-26 06:28:29,987 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0281 | Val rms_score: 0.4076
+ 2025-09-26 06:28:42,563 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0265 | Val rms_score: 0.4077
+ 2025-09-26 06:28:52,026 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0253 | Val rms_score: 0.4100
+ 2025-09-26 06:29:03,891 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0262 | Val rms_score: 0.4108
+ 2025-09-26 06:29:13,792 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0280 | Val rms_score: 0.4113
+ 2025-09-26 06:29:26,146 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0258 | Val rms_score: 0.4096
+ 2025-09-26 06:29:37,692 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0282 | Val rms_score: 0.4139
+ 2025-09-26 06:29:47,603 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0242 | Val rms_score: 0.4061
+ 2025-09-26 06:30:01,299 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0247 | Val rms_score: 0.4147
+ 2025-09-26 06:30:14,040 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0246 | Val rms_score: 0.4130
+ 2025-09-26 06:30:23,759 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0237 | Val rms_score: 0.4119
+ 2025-09-26 06:30:36,012 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0242 | Val rms_score: 0.4089
+ 2025-09-26 06:30:45,750 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0226 | Val rms_score: 0.4090
+ 2025-09-26 06:30:57,164 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0233 | Val rms_score: 0.4070
+ 2025-09-26 06:31:10,399 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0228 | Val rms_score: 0.4086
+ 2025-09-26 06:31:20,290 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0254 | Val rms_score: 0.4072
+ 2025-09-26 06:31:33,315 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0230 | Val rms_score: 0.4097
+ 2025-09-26 06:31:43,115 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0229 | Val rms_score: 0.4131
+ 2025-09-26 06:31:55,377 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0216 | Val rms_score: 0.4096
+ 2025-09-26 06:32:08,608 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0225 | Val rms_score: 0.4073
+ 2025-09-26 06:32:18,388 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0198 | Val rms_score: 0.4093
+ 2025-09-26 06:32:29,661 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0213 | Val rms_score: 0.4123
+ 2025-09-26 06:32:43,207 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0230 | Val rms_score: 0.4108
+ 2025-09-26 06:32:54,463 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0225 | Val rms_score: 0.4068
+ 2025-09-26 06:33:08,898 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0210 | Val rms_score: 0.4070
+ 2025-09-26 06:33:20,757 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0216 | Val rms_score: 0.4094
+ 2025-09-26 06:33:34,934 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0211 | Val rms_score: 0.4051
+ 2025-09-26 06:33:46,714 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0219 | Val rms_score: 0.4098
+ 2025-09-26 06:33:59,660 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0216 | Val rms_score: 0.4061
+ 2025-09-26 06:34:11,185 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0200 | Val rms_score: 0.4069
+ 2025-09-26 06:34:24,086 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0197 | Val rms_score: 0.4095
+ 2025-09-26 06:34:38,114 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0198 | Val rms_score: 0.4054
+ 2025-09-26 06:34:49,940 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0206 | Val rms_score: 0.4037
+ 2025-09-26 06:35:03,760 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0198 | Val rms_score: 0.4073
+ 2025-09-26 06:35:15,750 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0187 | Val rms_score: 0.4108
+ 2025-09-26 06:35:29,885 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0204 | Val rms_score: 0.4061
+ 2025-09-26 06:35:41,388 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0201 | Val rms_score: 0.4048
+ 2025-09-26 06:35:55,657 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0195 | Val rms_score: 0.4097
+ 2025-09-26 06:36:09,684 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0206 | Val rms_score: 0.4098
+ 2025-09-26 06:36:21,308 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0203 | Val rms_score: 0.4080
+ 2025-09-26 06:36:33,836 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0189 | Val rms_score: 0.4037
+ 2025-09-26 06:36:44,826 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0203 | Val rms_score: 0.4067
+ 2025-09-26 06:36:58,627 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0191 | Val rms_score: 0.4077
+ 2025-09-26 06:37:09,544 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0179 | Val rms_score: 0.4066
+ 2025-09-26 06:37:23,616 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0179 | Val rms_score: 0.4041
+ 2025-09-26 06:37:37,773 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0193 | Val rms_score: 0.4045
+ 2025-09-26 06:37:48,165 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0191 | Val rms_score: 0.4052
+ 2025-09-26 06:38:00,780 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0172 | Val rms_score: 0.4093
+ 2025-09-26 06:38:13,715 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0198 | Val rms_score: 0.4054
+ 2025-09-26 06:38:27,270 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0187 | Val rms_score: 0.4068
+ 2025-09-26 06:38:38,611 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0185 | Val rms_score: 0.4042
+ 2025-09-26 06:38:47,524 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0189 | Val rms_score: 0.4031
+ 2025-09-26 06:39:01,216 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0182 | Val rms_score: 0.4068
+ 2025-09-26 06:39:02,328 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4121
+ 2025-09-26 06:39:02,804 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4199, Std Dev: 0.0064
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_microsom_stab_r_epochs100_batch_size32_20250926_075143.log ADDED
@@ -0,0 +1,337 @@
+ 2025-09-26 07:51:43,941 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_microsom_stab_r
+ 2025-09-26 07:51:43,941 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - dataset: adme_microsom_stab_r, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
+ 2025-09-26 07:51:43,953 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_microsom_stab_r at 2025-09-26_07-51-43
+ 2025-09-26 07:51:51,671 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8456 | Val rms_score: 0.5527
+ 2025-09-26 07:51:51,671 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 68
+ 2025-09-26 07:51:52,335 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5527
+ 2025-09-26 07:51:59,120 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6007 | Val rms_score: 0.5502
+ 2025-09-26 07:51:59,354 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 136
+ 2025-09-26 07:51:59,966 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5502
+ 2025-09-26 07:52:07,884 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3281 | Val rms_score: 0.5227
+ 2025-09-26 07:52:08,073 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 204
+ 2025-09-26 07:52:08,702 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5227
+ 2025-09-26 07:52:16,214 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3897 | Val rms_score: 0.5451
+ 2025-09-26 07:52:23,608 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3219 | Val rms_score: 0.5283
+ 2025-09-26 07:52:31,317 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2734 | Val rms_score: 0.5255
+ 2025-09-26 07:52:38,973 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2160 | Val rms_score: 0.5257
+ 2025-09-26 07:52:46,118 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1768 | Val rms_score: 0.5100
18
+ 2025-09-26 07:52:46,272 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 544
19
+ 2025-09-26 07:52:46,906 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.5100
20
+ 2025-09-26 07:52:54,303 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1660 | Val rms_score: 0.5357
21
+ 2025-09-26 07:53:01,832 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1213 | Val rms_score: 0.5237
22
+ 2025-09-26 07:53:09,413 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1055 | Val rms_score: 0.5294
23
+ 2025-09-26 07:53:22,152 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1104 | Val rms_score: 0.5273
24
+ 2025-09-26 07:53:36,377 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0777 | Val rms_score: 0.5277
25
+ 2025-09-26 07:53:50,521 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0733 | Val rms_score: 0.5338
26
+ 2025-09-26 07:54:05,329 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0699 | Val rms_score: 0.5245
27
+ 2025-09-26 07:54:18,817 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0593 | Val rms_score: 0.5324
28
+ 2025-09-26 07:54:32,249 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0580 | Val rms_score: 0.5345
29
+ 2025-09-26 07:54:45,789 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0654 | Val rms_score: 0.5426
30
+ 2025-09-26 07:54:59,018 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0487 | Val rms_score: 0.5365
31
+ 2025-09-26 07:55:12,394 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0469 | Val rms_score: 0.5358
32
+ 2025-09-26 07:55:25,883 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0488 | Val rms_score: 0.5211
33
+ 2025-09-26 07:55:38,928 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0416 | Val rms_score: 0.5298
34
+ 2025-09-26 07:55:52,550 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0413 | Val rms_score: 0.5203
35
+ 2025-09-26 07:56:06,370 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0376 | Val rms_score: 0.5209
36
+ 2025-09-26 07:56:20,213 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0407 | Val rms_score: 0.5334
37
+ 2025-09-26 07:56:34,139 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0372 | Val rms_score: 0.5382
38
+ 2025-09-26 07:56:47,852 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0378 | Val rms_score: 0.5258
39
+ 2025-09-26 07:57:01,229 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0342 | Val rms_score: 0.5276
40
+ 2025-09-26 07:57:15,377 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0338 | Val rms_score: 0.5316
41
+ 2025-09-26 07:57:29,907 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0322 | Val rms_score: 0.5216
42
+ 2025-09-26 07:57:43,964 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0332 | Val rms_score: 0.5245
43
+ 2025-09-26 07:57:57,727 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0317 | Val rms_score: 0.5251
44
+ 2025-09-26 07:58:10,830 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0307 | Val rms_score: 0.5304
45
+ 2025-09-26 07:58:24,534 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0312 | Val rms_score: 0.5336
46
+ 2025-09-26 07:58:38,594 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0322 | Val rms_score: 0.5348
47
+ 2025-09-26 07:58:52,276 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0308 | Val rms_score: 0.5304
48
+ 2025-09-26 07:59:06,603 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0283 | Val rms_score: 0.5237
49
+ 2025-09-26 07:59:19,260 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0283 | Val rms_score: 0.5291
50
+ 2025-09-26 07:59:33,222 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0299 | Val rms_score: 0.5249
51
+ 2025-09-26 07:59:47,146 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0289 | Val rms_score: 0.5289
52
+ 2025-09-26 08:00:01,156 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0276 | Val rms_score: 0.5291
53
+ 2025-09-26 08:00:15,732 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0283 | Val rms_score: 0.5254
54
+ 2025-09-26 08:00:28,759 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0267 | Val rms_score: 0.5289
55
+ 2025-09-26 08:00:43,162 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0259 | Val rms_score: 0.5198
56
+ 2025-09-26 08:00:57,550 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0279 | Val rms_score: 0.5288
57
+ 2025-09-26 08:01:10,765 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0254 | Val rms_score: 0.5248
58
+ 2025-09-26 08:01:25,159 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0268 | Val rms_score: 0.5290
59
+ 2025-09-26 08:01:38,495 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0243 | Val rms_score: 0.5297
60
+ 2025-09-26 08:01:52,771 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0283 | Val rms_score: 0.5300
61
+ 2025-09-26 08:02:05,678 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0253 | Val rms_score: 0.5293
62
+ 2025-09-26 08:02:19,363 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0244 | Val rms_score: 0.5290
63
+ 2025-09-26 08:02:33,811 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0241 | Val rms_score: 0.5304
64
+ 2025-09-26 08:02:47,310 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0243 | Val rms_score: 0.5253
65
+ 2025-09-26 08:03:01,284 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0231 | Val rms_score: 0.5244
66
+ 2025-09-26 08:03:14,461 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0227 | Val rms_score: 0.5208
67
+ 2025-09-26 08:03:27,969 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0229 | Val rms_score: 0.5275
68
+ 2025-09-26 08:03:41,535 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0226 | Val rms_score: 0.5204
69
+ 2025-09-26 08:03:54,416 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0256 | Val rms_score: 0.5278
70
+ 2025-09-26 08:04:10,236 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0229 | Val rms_score: 0.5290
71
+ 2025-09-26 08:04:24,120 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0229 | Val rms_score: 0.5246
72
+ 2025-09-26 08:04:36,075 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0210 | Val rms_score: 0.5221
73
+ 2025-09-26 08:04:50,136 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0237 | Val rms_score: 0.5245
74
+ 2025-09-26 08:05:02,970 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0224 | Val rms_score: 0.5218
75
+ 2025-09-26 08:05:16,370 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0224 | Val rms_score: 0.5202
76
+ 2025-09-26 08:05:28,725 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0252 | Val rms_score: 0.5207
77
+ 2025-09-26 08:05:42,480 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0211 | Val rms_score: 0.5237
78
+ 2025-09-26 08:05:56,668 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0223 | Val rms_score: 0.5200
79
+ 2025-09-26 08:06:10,227 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0194 | Val rms_score: 0.5231
80
+ 2025-09-26 08:06:24,308 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0200 | Val rms_score: 0.5268
81
+ 2025-09-26 08:06:38,056 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0212 | Val rms_score: 0.5210
82
+ 2025-09-26 08:06:51,786 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0227 | Val rms_score: 0.5222
83
+ 2025-09-26 08:07:05,419 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0208 | Val rms_score: 0.5249
84
+ 2025-09-26 08:07:18,354 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0199 | Val rms_score: 0.5260
85
+ 2025-09-26 08:07:34,110 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0192 | Val rms_score: 0.5237
86
+ 2025-09-26 08:07:48,095 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0199 | Val rms_score: 0.5239
87
+ 2025-09-26 08:08:00,407 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0203 | Val rms_score: 0.5220
88
+ 2025-09-26 08:08:15,127 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0209 | Val rms_score: 0.5193
89
+ 2025-09-26 08:08:29,018 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0232 | Val rms_score: 0.5206
90
+ 2025-09-26 08:08:43,611 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0217 | Val rms_score: 0.5264
91
+ 2025-09-26 08:08:57,803 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0202 | Val rms_score: 0.5246
92
+ 2025-09-26 08:09:11,866 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0193 | Val rms_score: 0.5231
93
+ 2025-09-26 08:09:26,098 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0202 | Val rms_score: 0.5279
94
+ 2025-09-26 08:09:39,653 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0206 | Val rms_score: 0.5237
95
+ 2025-09-26 08:09:52,533 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0197 | Val rms_score: 0.5223
96
+ 2025-09-26 08:10:05,095 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0187 | Val rms_score: 0.5211
97
+ 2025-09-26 08:10:18,258 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0203 | Val rms_score: 0.5211
98
+ 2025-09-26 08:10:31,768 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0179 | Val rms_score: 0.5186
99
+ 2025-09-26 08:10:45,125 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0202 | Val rms_score: 0.5216
100
+ 2025-09-26 08:11:00,065 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0195 | Val rms_score: 0.5233
101
+ 2025-09-26 08:11:13,396 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0199 | Val rms_score: 0.5222
102
+ 2025-09-26 08:11:27,579 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0191 | Val rms_score: 0.5243
103
+ 2025-09-26 08:11:41,861 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0181 | Val rms_score: 0.5216
104
+ 2025-09-26 08:11:55,103 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0191 | Val rms_score: 0.5222
105
+ 2025-09-26 08:12:09,330 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0191 | Val rms_score: 0.5219
106
+ 2025-09-26 08:12:23,490 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0181 | Val rms_score: 0.5227
107
+ 2025-09-26 08:12:37,843 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0199 | Val rms_score: 0.5274
108
+ 2025-09-26 08:12:52,358 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0180 | Val rms_score: 0.5247
109
+ 2025-09-26 08:13:05,356 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0190 | Val rms_score: 0.5246
110
+ 2025-09-26 08:13:19,068 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0183 | Val rms_score: 0.5200
111
+ 2025-09-26 08:13:32,371 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0184 | Val rms_score: 0.5243
112
+ 2025-09-26 08:13:33,579 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Test rms_score: 0.4670
113
+ 2025-09-26 08:13:33,898 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_microsom_stab_r at 2025-09-26_08-13-33
114
+ 2025-09-26 08:13:47,238 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8456 | Val rms_score: 0.5504
115
+ 2025-09-26 08:13:47,238 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 68
116
+ 2025-09-26 08:13:47,980 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5504
117
+ 2025-09-26 08:14:00,268 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5903 | Val rms_score: 0.5307
118
+ 2025-09-26 08:14:00,476 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 136
119
+ 2025-09-26 08:14:01,086 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5307
120
+ 2025-09-26 08:14:14,250 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4746 | Val rms_score: 0.5606
121
+ 2025-09-26 08:14:26,792 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3897 | Val rms_score: 0.5281
122
+ 2025-09-26 08:14:26,960 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 272
123
+ 2025-09-26 08:14:27,539 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5281
124
+ 2025-09-26 08:14:40,979 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3531 | Val rms_score: 0.5170
125
+ 2025-09-26 08:14:41,171 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 340
126
+ 2025-09-26 08:14:41,813 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.5170
127
+ 2025-09-26 08:14:54,640 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2617 | Val rms_score: 0.5146
128
+ 2025-09-26 08:14:55,103 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 408
129
+ 2025-09-26 08:14:55,717 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.5146
130
+ 2025-09-26 08:15:09,006 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2114 | Val rms_score: 0.5182
131
+ 2025-09-26 08:15:23,023 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1655 | Val rms_score: 0.5256
132
+ 2025-09-26 08:15:36,324 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1322 | Val rms_score: 0.5307
133
+ 2025-09-26 08:15:50,481 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1176 | Val rms_score: 0.5289
134
+ 2025-09-26 08:16:04,395 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1094 | Val rms_score: 0.5171
135
+ 2025-09-26 08:16:18,184 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0835 | Val rms_score: 0.5211
136
+ 2025-09-26 08:16:31,565 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0744 | Val rms_score: 0.5353
137
+ 2025-09-26 08:16:45,387 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0688 | Val rms_score: 0.5441
138
+ 2025-09-26 08:17:00,199 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0668 | Val rms_score: 0.5437
139
+ 2025-09-26 08:17:13,887 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0630 | Val rms_score: 0.5217
140
+ 2025-09-26 08:17:28,020 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0539 | Val rms_score: 0.5301
141
+ 2025-09-26 08:17:41,161 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0531 | Val rms_score: 0.5297
142
+ 2025-09-26 08:17:54,334 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0480 | Val rms_score: 0.5269
143
+ 2025-09-26 08:18:07,700 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0453 | Val rms_score: 0.5358
144
+ 2025-09-26 08:18:21,017 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0424 | Val rms_score: 0.5208
145
+ 2025-09-26 08:18:34,962 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0398 | Val rms_score: 0.5333
146
+ 2025-09-26 08:18:48,738 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0388 | Val rms_score: 0.5210
147
+ 2025-09-26 08:19:01,785 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0359 | Val rms_score: 0.5265
148
+ 2025-09-26 08:19:16,082 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0365 | Val rms_score: 0.5346
149
+ 2025-09-26 08:19:30,711 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0333 | Val rms_score: 0.5359
150
+ 2025-09-26 08:19:48,545 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0321 | Val rms_score: 0.5362
151
+ 2025-09-26 08:20:05,951 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0371 | Val rms_score: 0.5292
152
+ 2025-09-26 08:20:23,134 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0312 | Val rms_score: 0.5398
153
+ 2025-09-26 08:20:42,252 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0352 | Val rms_score: 0.5301
154
+ 2025-09-26 08:20:57,287 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0383 | Val rms_score: 0.5377
155
+ 2025-09-26 08:21:15,174 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0315 | Val rms_score: 0.5342
156
+ 2025-09-26 08:21:32,279 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0302 | Val rms_score: 0.5284
157
+ 2025-09-26 08:21:49,497 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0269 | Val rms_score: 0.5344
158
+ 2025-09-26 08:22:07,177 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0283 | Val rms_score: 0.5291
159
+ 2025-09-26 08:22:25,018 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0278 | Val rms_score: 0.5225
160
+ 2025-09-26 08:22:42,985 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0288 | Val rms_score: 0.5305
161
+ 2025-09-26 08:22:59,983 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0280 | Val rms_score: 0.5295
162
+ 2025-09-26 08:23:17,434 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0270 | Val rms_score: 0.5299
163
+ 2025-09-26 08:23:35,063 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0279 | Val rms_score: 0.5281
164
+ 2025-09-26 08:23:52,517 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0248 | Val rms_score: 0.5305
165
+ 2025-09-26 08:24:10,440 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0248 | Val rms_score: 0.5320
166
+ 2025-09-26 08:24:27,315 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0233 | Val rms_score: 0.5323
167
+ 2025-09-26 08:24:44,991 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0234 | Val rms_score: 0.5269
168
+ 2025-09-26 08:25:03,478 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0253 | Val rms_score: 0.5365
169
+ 2025-09-26 08:25:21,022 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0240 | Val rms_score: 0.5331
170
+ 2025-09-26 08:25:39,178 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0231 | Val rms_score: 0.5328
171
+ 2025-09-26 08:25:55,001 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0232 | Val rms_score: 0.5271
172
+ 2025-09-26 08:26:12,545 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0215 | Val rms_score: 0.5328
173
+ 2025-09-26 08:26:30,359 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0230 | Val rms_score: 0.5287
174
+ 2025-09-26 08:26:48,324 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0223 | Val rms_score: 0.5293
175
+ 2025-09-26 08:27:06,218 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0231 | Val rms_score: 0.5345
176
+ 2025-09-26 08:27:23,553 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0210 | Val rms_score: 0.5351
177
+ 2025-09-26 08:27:39,275 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0221 | Val rms_score: 0.5318
178
+ 2025-09-26 08:27:56,757 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0211 | Val rms_score: 0.5351
179
+ 2025-09-26 08:28:14,498 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0176 | Val rms_score: 0.5275
180
+ 2025-09-26 08:28:31,026 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0219 | Val rms_score: 0.5329
181
+ 2025-09-26 08:28:48,682 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0212 | Val rms_score: 0.5303
182
+ 2025-09-26 08:29:07,560 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0212 | Val rms_score: 0.5284
183
+ 2025-09-26 08:29:23,960 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0223 | Val rms_score: 0.5278
184
+ 2025-09-26 08:29:39,738 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0215 | Val rms_score: 0.5297
185
+ 2025-09-26 08:29:57,869 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0203 | Val rms_score: 0.5367
186
+ 2025-09-26 08:30:15,150 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0193 | Val rms_score: 0.5252
187
+ 2025-09-26 08:30:32,713 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0192 | Val rms_score: 0.5316
188
+ 2025-09-26 08:30:50,915 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0207 | Val rms_score: 0.5222
189
+ 2025-09-26 08:31:08,013 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0200 | Val rms_score: 0.5308
190
+ 2025-09-26 08:31:25,836 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0205 | Val rms_score: 0.5323
191
+ 2025-09-26 08:31:43,391 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0206 | Val rms_score: 0.5289
192
+ 2025-09-26 08:32:01,135 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0202 | Val rms_score: 0.5262
193
+ 2025-09-26 08:32:19,002 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0201 | Val rms_score: 0.5284
194
+ 2025-09-26 08:32:36,986 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0197 | Val rms_score: 0.5290
195
+ 2025-09-26 08:32:53,408 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0187 | Val rms_score: 0.5311
196
+ 2025-09-26 08:33:10,705 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0193 | Val rms_score: 0.5298
197
+ 2025-09-26 08:33:29,636 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0193 | Val rms_score: 0.5285
198
+ 2025-09-26 08:33:47,023 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0196 | Val rms_score: 0.5290
199
+ 2025-09-26 08:34:04,596 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0192 | Val rms_score: 0.5272
200
+ 2025-09-26 08:34:21,142 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0181 | Val rms_score: 0.5253
201
+ 2025-09-26 08:34:38,573 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0200 | Val rms_score: 0.5299
202
+ 2025-09-26 08:34:56,572 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0178 | Val rms_score: 0.5258
203
+ 2025-09-26 08:35:14,564 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0181 | Val rms_score: 0.5231
204
+ 2025-09-26 08:35:32,503 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0172 | Val rms_score: 0.5260
205
+ 2025-09-26 08:35:50,006 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0185 | Val rms_score: 0.5301
206
+ 2025-09-26 08:36:07,864 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0188 | Val rms_score: 0.5261
207
+ 2025-09-26 08:36:25,892 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0176 | Val rms_score: 0.5298
208
+ 2025-09-26 08:36:43,315 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0183 | Val rms_score: 0.5284
209
+ 2025-09-26 08:37:00,998 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0172 | Val rms_score: 0.5322
210
+ 2025-09-26 08:37:17,949 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0190 | Val rms_score: 0.5292
211
+ 2025-09-26 08:37:34,979 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0170 | Val rms_score: 0.5288
212
+ 2025-09-26 08:37:53,566 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0183 | Val rms_score: 0.5265
213
+ 2025-09-26 08:38:09,259 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0170 | Val rms_score: 0.5293
214
+ 2025-09-26 08:38:26,653 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0172 | Val rms_score: 0.5297
215
+ 2025-09-26 08:38:43,805 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0190 | Val rms_score: 0.5298
216
+ 2025-09-26 08:39:01,208 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0161 | Val rms_score: 0.5286
217
+ 2025-09-26 08:39:18,826 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0170 | Val rms_score: 0.5310
218
+ 2025-09-26 08:39:36,748 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0181 | Val rms_score: 0.5296
219
+ 2025-09-26 08:39:54,606 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0160 | Val rms_score: 0.5289
+ 2025-09-26 08:40:11,401 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0165 | Val rms_score: 0.5271
+ 2025-09-26 08:40:28,711 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0156 | Val rms_score: 0.5292
+ 2025-09-26 08:40:46,073 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0175 | Val rms_score: 0.5267
+ 2025-09-26 08:41:03,880 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0176 | Val rms_score: 0.5273
+ 2025-09-26 08:41:05,031 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Test rms_score: 0.4448
+ 2025-09-26 08:41:05,337 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_microsom_stab_r at 2025-09-26_08-41-05
+ 2025-09-26 08:41:21,826 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8676 | Val rms_score: 0.5588
+ 2025-09-26 08:41:21,826 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 68
+ 2025-09-26 08:41:22,452 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5588
+ 2025-09-26 08:41:37,456 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5660 | Val rms_score: 0.5203
+ 2025-09-26 08:41:37,601 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 136
+ 2025-09-26 08:41:38,144 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5203
+ 2025-09-26 08:41:54,391 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4727 | Val rms_score: 0.5236
+ 2025-09-26 08:42:11,752 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4062 | Val rms_score: 0.5197
+ 2025-09-26 08:42:11,902 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 272
+ 2025-09-26 08:42:12,483 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5197
+ 2025-09-26 08:42:29,682 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3094 | Val rms_score: 0.5210
+ 2025-09-26 08:42:46,467 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2520 | Val rms_score: 0.5234
+ 2025-09-26 08:43:03,065 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2114 | Val rms_score: 0.5152
+ 2025-09-26 08:43:03,232 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 476
+ 2025-09-26 08:43:03,805 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.5152
+ 2025-09-26 08:43:21,736 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1832 | Val rms_score: 0.5283
+ 2025-09-26 08:43:39,435 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1810 | Val rms_score: 0.5119
+ 2025-09-26 08:43:39,588 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 612
+ 2025-09-26 08:43:40,165 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.5119
+ 2025-09-26 08:43:57,869 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1333 | Val rms_score: 0.5309
+ 2025-09-26 08:44:16,957 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1055 | Val rms_score: 0.5248
+ 2025-09-26 08:44:33,275 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0913 | Val rms_score: 0.5288
+ 2025-09-26 08:44:49,163 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0818 | Val rms_score: 0.5277
+ 2025-09-26 08:45:06,713 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0718 | Val rms_score: 0.5240
+ 2025-09-26 08:45:24,965 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0863 | Val rms_score: 0.5157
+ 2025-09-26 08:45:42,332 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0685 | Val rms_score: 0.5322
+ 2025-09-26 08:46:00,238 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0592 | Val rms_score: 0.5259
+ 2025-09-26 08:46:15,978 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0505 | Val rms_score: 0.5246
+ 2025-09-26 08:46:33,422 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0540 | Val rms_score: 0.5265
+ 2025-09-26 08:46:50,704 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0518 | Val rms_score: 0.5308
+ 2025-09-26 08:47:08,351 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0474 | Val rms_score: 0.5287
+ 2025-09-26 08:47:26,172 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0439 | Val rms_score: 0.5196
+ 2025-09-26 08:47:43,528 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0449 | Val rms_score: 0.5316
+ 2025-09-26 08:48:00,835 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0378 | Val rms_score: 0.5257
+ 2025-09-26 08:48:18,122 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0414 | Val rms_score: 0.5267
+ 2025-09-26 08:48:34,174 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0377 | Val rms_score: 0.5262
+ 2025-09-26 08:48:52,222 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0382 | Val rms_score: 0.5212
+ 2025-09-26 08:49:09,538 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0325 | Val rms_score: 0.5273
+ 2025-09-26 08:49:27,338 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0375 | Val rms_score: 0.5325
+ 2025-09-26 08:49:45,557 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0357 | Val rms_score: 0.5408
+ 2025-09-26 08:50:00,976 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0366 | Val rms_score: 0.5233
+ 2025-09-26 08:50:18,109 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0315 | Val rms_score: 0.5236
+ 2025-09-26 08:50:35,661 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0314 | Val rms_score: 0.5257
+ 2025-09-26 08:50:53,537 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0291 | Val rms_score: 0.5302
+ 2025-09-26 08:51:11,588 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0308 | Val rms_score: 0.5265
+ 2025-09-26 08:51:28,154 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0303 | Val rms_score: 0.5249
+ 2025-09-26 08:51:45,282 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0344 | Val rms_score: 0.5365
+ 2025-09-26 08:52:02,765 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0315 | Val rms_score: 0.5270
+ 2025-09-26 08:52:20,492 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0296 | Val rms_score: 0.5225
+ 2025-09-26 08:52:38,305 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0277 | Val rms_score: 0.5274
+ 2025-09-26 08:52:54,849 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0294 | Val rms_score: 0.5230
+ 2025-09-26 08:53:11,686 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0259 | Val rms_score: 0.5227
+ 2025-09-26 08:53:28,401 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0273 | Val rms_score: 0.5291
+ 2025-09-26 08:53:45,904 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0281 | Val rms_score: 0.5237
+ 2025-09-26 08:54:04,771 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0246 | Val rms_score: 0.5255
+ 2025-09-26 08:54:21,140 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0244 | Val rms_score: 0.5229
+ 2025-09-26 08:54:37,994 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0252 | Val rms_score: 0.5267
+ 2025-09-26 08:54:55,214 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0247 | Val rms_score: 0.5290
+ 2025-09-26 08:55:12,657 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0264 | Val rms_score: 0.5232
+ 2025-09-26 08:55:30,614 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0267 | Val rms_score: 0.5308
+ 2025-09-26 08:55:48,210 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0244 | Val rms_score: 0.5287
+ 2025-09-26 08:56:04,872 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0237 | Val rms_score: 0.5288
+ 2025-09-26 08:56:22,017 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0238 | Val rms_score: 0.5264
+ 2025-09-26 08:56:39,820 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0234 | Val rms_score: 0.5190
+ 2025-09-26 08:56:57,333 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0227 | Val rms_score: 0.5236
+ 2025-09-26 08:57:14,754 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0205 | Val rms_score: 0.5249
+ 2025-09-26 08:57:33,134 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0236 | Val rms_score: 0.5260
+ 2025-09-26 08:57:48,452 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0224 | Val rms_score: 0.5217
+ 2025-09-26 08:58:06,239 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0246 | Val rms_score: 0.5258
+ 2025-09-26 08:58:24,014 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0233 | Val rms_score: 0.5267
+ 2025-09-26 08:58:41,554 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0218 | Val rms_score: 0.5221
+ 2025-09-26 08:58:57,345 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0203 | Val rms_score: 0.5279
+ 2025-09-26 08:59:14,753 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0219 | Val rms_score: 0.5239
+ 2025-09-26 08:59:32,222 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0222 | Val rms_score: 0.5236
+ 2025-09-26 08:59:50,103 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0221 | Val rms_score: 0.5185
+ 2025-09-26 09:00:07,509 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0219 | Val rms_score: 0.5271
+ 2025-09-26 09:00:24,288 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0215 | Val rms_score: 0.5233
+ 2025-09-26 09:00:41,681 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0234 | Val rms_score: 0.5189
+ 2025-09-26 09:00:59,099 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0217 | Val rms_score: 0.5232
+ 2025-09-26 09:01:16,522 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0219 | Val rms_score: 0.5250
+ 2025-09-26 09:01:34,322 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0201 | Val rms_score: 0.5243
+ 2025-09-26 09:01:51,144 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0209 | Val rms_score: 0.5221
+ 2025-09-26 09:02:08,432 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0201 | Val rms_score: 0.5272
+ 2025-09-26 09:02:26,773 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0216 | Val rms_score: 0.5203
+ 2025-09-26 09:02:44,046 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0206 | Val rms_score: 0.5233
+ 2025-09-26 09:03:01,586 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0182 | Val rms_score: 0.5227
+ 2025-09-26 09:03:19,227 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0211 | Val rms_score: 0.5230
+ 2025-09-26 09:03:36,411 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0204 | Val rms_score: 0.5252
+ 2025-09-26 09:03:53,747 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0203 | Val rms_score: 0.5213
+ 2025-09-26 09:04:11,593 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0198 | Val rms_score: 0.5238
+ 2025-09-26 09:04:28,951 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0220 | Val rms_score: 0.5212
+ 2025-09-26 09:04:46,050 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0200 | Val rms_score: 0.5221
+ 2025-09-26 09:05:02,856 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0199 | Val rms_score: 0.5222
+ 2025-09-26 09:05:20,114 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0216 | Val rms_score: 0.5211
+ 2025-09-26 09:05:37,665 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0195 | Val rms_score: 0.5200
+ 2025-09-26 09:05:55,496 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0196 | Val rms_score: 0.5245
+ 2025-09-26 09:06:12,202 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0197 | Val rms_score: 0.5295
+ 2025-09-26 09:06:28,571 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0185 | Val rms_score: 0.5249
+ 2025-09-26 09:06:47,008 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0195 | Val rms_score: 0.5170
+ 2025-09-26 09:07:04,270 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0190 | Val rms_score: 0.5288
+ 2025-09-26 09:07:21,764 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0192 | Val rms_score: 0.5232
+ 2025-09-26 09:07:39,515 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0202 | Val rms_score: 0.5244
+ 2025-09-26 09:07:56,804 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0180 | Val rms_score: 0.5197
+ 2025-09-26 09:08:13,899 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0190 | Val rms_score: 0.5243
+ 2025-09-26 09:08:31,565 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0187 | Val rms_score: 0.5222
+ 2025-09-26 09:08:48,688 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0183 | Val rms_score: 0.5228
+ 2025-09-26 09:09:06,226 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0185 | Val rms_score: 0.5207
+ 2025-09-26 09:09:21,290 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0200 | Val rms_score: 0.5218
+ 2025-09-26 09:09:38,247 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0195 | Val rms_score: 0.5224
+ 2025-09-26 09:09:55,751 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0170 | Val rms_score: 0.5229
+ 2025-09-26 09:09:56,607 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Test rms_score: 0.4587
+ 2025-09-26 09:09:56,967 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4568, Std Dev: 0.0091
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_permeability_epochs100_batch_size32_20250926_090956.log ADDED
@@ -0,0 +1,365 @@
+ 2025-09-26 09:09:56,969 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_permeability
+ 2025-09-26 09:09:56,969 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - dataset: adme_permeability, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
+ 2025-09-26 09:09:56,974 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_permeability at 2025-09-26_09-09-56
+ 2025-09-26 09:10:13,275 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6642 | Val rms_score: 0.4818
+ 2025-09-26 09:10:13,276 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 67
+ 2025-09-26 09:10:14,056 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4818
+ 2025-09-26 09:10:31,221 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4210 | Val rms_score: 0.4340
+ 2025-09-26 09:10:31,398 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 134
+ 2025-09-26 09:10:31,941 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4340
+ 2025-09-26 09:10:48,220 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.0245 | Val rms_score: 0.4298
+ 2025-09-26 09:10:48,395 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 201
+ 2025-09-26 09:10:48,926 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4298
+ 2025-09-26 09:11:04,042 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3209 | Val rms_score: 0.4270
+ 2025-09-26 09:11:04,231 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 268
+ 2025-09-26 09:11:04,772 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4270
+ 2025-09-26 09:11:21,941 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2446 | Val rms_score: 0.4318
+ 2025-09-26 09:11:38,520 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1118 | Val rms_score: 0.3935
+ 2025-09-26 09:11:38,972 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 402
+ 2025-09-26 09:11:39,552 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.3935
+ 2025-09-26 09:11:56,598 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1754 | Val rms_score: 0.4118
+ 2025-09-26 09:12:12,198 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1319 | Val rms_score: 0.4023
+ 2025-09-26 09:12:28,918 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0977 | Val rms_score: 0.4329
+ 2025-09-26 09:12:44,986 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1189 | Val rms_score: 0.4039
+ 2025-09-26 09:13:01,902 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1022 | Val rms_score: 0.4115
+ 2025-09-26 09:13:19,182 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0579 | Val rms_score: 0.4096
+ 2025-09-26 09:13:36,001 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0690 | Val rms_score: 0.4029
+ 2025-09-26 09:13:53,701 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0748 | Val rms_score: 0.4851
+ 2025-09-26 09:14:12,230 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0957 | Val rms_score: 0.4092
+ 2025-09-26 09:14:29,605 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0723 | Val rms_score: 0.4170
+ 2025-09-26 09:14:46,188 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0677 | Val rms_score: 0.4108
+ 2025-09-26 09:15:03,527 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0508 | Val rms_score: 0.4052
+ 2025-09-26 09:15:21,233 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0478 | Val rms_score: 0.3987
+ 2025-09-26 09:15:38,769 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0445 | Val rms_score: 0.4051
+ 2025-09-26 09:15:55,872 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0421 | Val rms_score: 0.4025
+ 2025-09-26 09:16:12,246 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0375 | Val rms_score: 0.3987
+ 2025-09-26 09:16:29,768 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0553 | Val rms_score: 0.4091
+ 2025-09-26 09:16:47,321 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0427 | Val rms_score: 0.4086
+ 2025-09-26 09:17:04,210 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0368 | Val rms_score: 0.4044
+ 2025-09-26 09:17:21,734 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0327 | Val rms_score: 0.4122
+ 2025-09-26 09:17:39,020 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0289 | Val rms_score: 0.4075
+ 2025-09-26 09:17:56,825 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0322 | Val rms_score: 0.4057
+ 2025-09-26 09:18:14,251 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0332 | Val rms_score: 0.4092
+ 2025-09-26 09:18:32,030 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0285 | Val rms_score: 0.4064
+ 2025-09-26 09:18:48,769 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0274 | Val rms_score: 0.4108
+ 2025-09-26 09:19:05,471 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0281 | Val rms_score: 0.4118
+ 2025-09-26 09:19:22,772 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0266 | Val rms_score: 0.4087
+ 2025-09-26 09:19:40,680 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0253 | Val rms_score: 0.4053
+ 2025-09-26 09:19:57,697 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0233 | Val rms_score: 0.4034
+ 2025-09-26 09:20:15,132 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0233 | Val rms_score: 0.4047
+ 2025-09-26 09:20:32,801 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0238 | Val rms_score: 0.4000
+ 2025-09-26 09:20:50,466 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0260 | Val rms_score: 0.4030
+ 2025-09-26 09:21:08,027 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0278 | Val rms_score: 0.4094
+ 2025-09-26 09:21:24,945 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0266 | Val rms_score: 0.4012
+ 2025-09-26 09:21:42,120 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0241 | Val rms_score: 0.4075
+ 2025-09-26 09:21:59,430 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0193 | Val rms_score: 0.4069
+ 2025-09-26 09:22:16,671 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0231 | Val rms_score: 0.4062
+ 2025-09-26 09:22:33,765 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0203 | Val rms_score: 0.4069
+ 2025-09-26 09:22:50,392 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0238 | Val rms_score: 0.4068
+ 2025-09-26 09:23:07,007 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0218 | Val rms_score: 0.4089
+ 2025-09-26 09:23:24,404 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0202 | Val rms_score: 0.4057
+ 2025-09-26 09:23:42,024 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0183 | Val rms_score: 0.4086
+ 2025-09-26 09:24:00,058 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0198 | Val rms_score: 0.4084
+ 2025-09-26 09:24:17,415 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0197 | Val rms_score: 0.4068
+ 2025-09-26 09:24:35,366 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0237 | Val rms_score: 0.4060
+ 2025-09-26 09:24:53,392 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0204 | Val rms_score: 0.4009
+ 2025-09-26 09:25:10,764 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0213 | Val rms_score: 0.4055
+ 2025-09-26 09:25:27,603 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0216 | Val rms_score: 0.4086
+ 2025-09-26 09:25:44,603 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0247 | Val rms_score: 0.4073
+ 2025-09-26 09:26:02,636 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0218 | Val rms_score: 0.4098
+ 2025-09-26 09:26:20,450 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0219 | Val rms_score: 0.4082
+ 2025-09-26 09:26:37,925 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0181 | Val rms_score: 0.4045
+ 2025-09-26 09:26:55,211 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0189 | Val rms_score: 0.4042
+ 2025-09-26 09:27:13,983 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0156 | Val rms_score: 0.4040
+ 2025-09-26 09:27:31,511 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0177 | Val rms_score: 0.4070
+ 2025-09-26 09:27:47,812 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0169 | Val rms_score: 0.4056
+ 2025-09-26 09:28:05,016 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0161 | Val rms_score: 0.4051
+ 2025-09-26 09:28:22,135 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0163 | Val rms_score: 0.4040
+ 2025-09-26 09:28:39,143 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0161 | Val rms_score: 0.4043
+ 2025-09-26 09:28:55,860 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0168 | Val rms_score: 0.4045
+ 2025-09-26 09:29:13,802 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0164 | Val rms_score: 0.4063
+ 2025-09-26 09:29:30,898 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0157 | Val rms_score: 0.4042
+ 2025-09-26 09:29:48,672 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0185 | Val rms_score: 0.4098
+ 2025-09-26 09:30:04,598 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0296 | Val rms_score: 0.4069
+ 2025-09-26 09:30:21,673 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0189 | Val rms_score: 0.4065
+ 2025-09-26 09:30:37,596 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0164 | Val rms_score: 0.4052
+ 2025-09-26 09:30:55,197 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0185 | Val rms_score: 0.4073
+ 2025-09-26 09:31:12,843 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0165 | Val rms_score: 0.4052
+ 2025-09-26 09:31:31,177 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0175 | Val rms_score: 0.4043
+ 2025-09-26 09:31:47,998 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0161 | Val rms_score: 0.4053
+ 2025-09-26 09:32:05,790 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0168 | Val rms_score: 0.4044
+ 2025-09-26 09:32:23,583 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0158 | Val rms_score: 0.4021
+ 2025-09-26 09:32:40,548 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0155 | Val rms_score: 0.4063
+ 2025-09-26 09:32:58,543 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0167 | Val rms_score: 0.4016
+ 2025-09-26 09:33:16,879 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0157 | Val rms_score: 0.3999
+ 2025-09-26 09:33:35,103 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0155 | Val rms_score: 0.4030
+ 2025-09-26 09:33:53,361 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0155 | Val rms_score: 0.4050
+ 2025-09-26 09:34:11,246 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0158 | Val rms_score: 0.4031
+ 2025-09-26 09:34:29,121 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0148 | Val rms_score: 0.4028
+ 2025-09-26 09:34:45,202 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0153 | Val rms_score: 0.4047
+ 2025-09-26 09:35:03,524 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0147 | Val rms_score: 0.4026
+ 2025-09-26 09:35:21,128 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0146 | Val rms_score: 0.4051
+ 2025-09-26 09:35:38,237 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0149 | Val rms_score: 0.4053
+ 2025-09-26 09:35:56,883 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0145 | Val rms_score: 0.4025
+ 2025-09-26 09:36:12,664 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0153 | Val rms_score: 0.4046
+ 2025-09-26 09:36:30,476 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0154 | Val rms_score: 0.4041
+ 2025-09-26 09:36:47,562 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0152 | Val rms_score: 0.4044
+ 2025-09-26 09:37:05,481 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0155 | Val rms_score: 0.4008
+ 2025-09-26 09:37:23,125 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0147 | Val rms_score: 0.4020
+ 2025-09-26 09:37:39,591 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0145 | Val rms_score: 0.4017
+ 2025-09-26 09:37:57,903 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0144 | Val rms_score: 0.4009
+ 2025-09-26 09:38:14,450 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0137 | Val rms_score: 0.4061
+ 2025-09-26 09:38:32,615 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0149 | Val rms_score: 0.4053
+ 2025-09-26 09:38:50,254 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0145 | Val rms_score: 0.4031
+ 2025-09-26 09:38:51,190 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Test rms_score: 0.5226
+ 2025-09-26 09:38:51,560 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_permeability at 2025-09-26_09-38-51
+ 2025-09-26 09:39:08,088 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6418 | Val rms_score: 0.4820
+ 2025-09-26 09:39:08,088 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 67
+ 2025-09-26 09:39:08,680 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4820
+ 2025-09-26 09:39:26,708 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4522 | Val rms_score: 0.4660
+ 2025-09-26 09:39:26,930 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 134
+ 2025-09-26 09:39:27,614 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4660
+ 2025-09-26 09:39:45,014 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.1133 | Val rms_score: 0.4327
+ 2025-09-26 09:39:45,198 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 201
+ 2025-09-26 09:39:45,780 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4327
+ 2025-09-26 09:40:01,682 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2929 | Val rms_score: 0.4399
+ 2025-09-26 09:40:18,368 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2661 | Val rms_score: 0.4373
+ 2025-09-26 09:40:36,341 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1279 | Val rms_score: 0.4036
+ 2025-09-26 09:40:36,803 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 402
+ 2025-09-26 09:40:37,368 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.4036
+ 2025-09-26 09:40:54,107 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1688 | Val rms_score: 0.4298
+ 2025-09-26 09:41:11,556 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1615 | Val rms_score: 0.4043
+ 2025-09-26 09:41:28,817 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1400 | Val rms_score: 0.3897
+ 2025-09-26 09:41:28,983 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 603
+ 2025-09-26 09:41:29,525 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3897
+ 2025-09-26 09:41:45,741 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1203 | Val rms_score: 0.4007
+ 2025-09-26 09:42:03,040 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1005 | Val rms_score: 0.4709
+ 2025-09-26 09:42:19,074 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0845 | Val rms_score: 0.4211
+ 2025-09-26 09:42:36,997 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0979 | Val rms_score: 0.4103
+ 2025-09-26 09:42:54,510 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0868 | Val rms_score: 0.4123
+ 2025-09-26 09:43:12,095 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0695 | Val rms_score: 0.4014
+ 2025-09-26 09:43:29,758 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0634 | Val rms_score: 0.4077
+ 2025-09-26 09:43:46,244 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0609 | Val rms_score: 0.3965
+ 2025-09-26 09:44:03,786 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0402 | Val rms_score: 0.4046
+ 2025-09-26 09:44:21,743 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0492 | Val rms_score: 0.3957
+ 2025-09-26 09:44:38,135 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0484 | Val rms_score: 0.4085
+ 2025-09-26 09:44:55,953 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0427 | Val rms_score: 0.4028
+ 2025-09-26 09:45:11,780 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0408 | Val rms_score: 0.4267
+ 2025-09-26 09:45:29,362 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0541 | Val rms_score: 0.3981
+ 2025-09-26 09:45:47,116 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0369 | Val rms_score: 0.4013
+ 2025-09-26 09:46:03,225 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0385 | Val rms_score: 0.4070
+ 2025-09-26 09:46:20,864 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0376 | Val rms_score: 0.3934
+ 2025-09-26 09:46:38,957 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0345 | Val rms_score: 0.3955
+ 2025-09-26 09:46:57,023 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0338 | Val rms_score: 0.3961
+ 2025-09-26 09:47:13,556 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0311 | Val rms_score: 0.3941
+ 2025-09-26 09:47:32,041 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0330 | Val rms_score: 0.3948
+ 2025-09-26 09:47:48,473 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0331 | Val rms_score: 0.3940
+ 2025-09-26 09:48:06,153 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0275 | Val rms_score: 0.3981
+ 2025-09-26 09:48:24,077 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0211 | Val rms_score: 0.3955
+ 2025-09-26 09:48:42,459 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0267 | Val rms_score: 0.3962
+ 2025-09-26 09:48:59,988 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0250 | Val rms_score: 0.4001
+ 2025-09-26 09:49:17,746 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0207 | Val rms_score: 0.3972
+ 2025-09-26 09:49:35,804 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0227 | Val rms_score: 0.3962
+ 2025-09-26 09:49:53,647 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0245 | Val rms_score: 0.3942
+ 2025-09-26 09:50:11,695 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0230 | Val rms_score: 0.4035
+ 2025-09-26 09:50:30,126 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0259 | Val rms_score: 0.3951
+ 2025-09-26 09:50:48,124 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0219 | Val rms_score: 0.3948
+ 2025-09-26 09:51:06,701 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0280 | Val rms_score: 0.3958
+ 2025-09-26 09:51:23,896 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0224 | Val rms_score: 0.3942
+ 2025-09-26 09:51:42,244 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0216 | Val rms_score: 0.3964
+ 2025-09-26 09:52:00,233 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0242 | Val rms_score: 0.3949
+ 2025-09-26 09:52:18,109 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0217 | Val rms_score: 0.3949
+ 2025-09-26 09:52:34,213 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0220 | Val rms_score: 0.3973
+ 2025-09-26 09:52:52,027 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0203 | Val rms_score: 0.3966
+ 2025-09-26 09:53:09,728 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0196 | Val rms_score: 0.3950
+ 2025-09-26 09:53:28,292 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0200 | Val rms_score: 0.3988
+ 2025-09-26 09:53:45,582 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0199 | Val rms_score: 0.3931
+ 2025-09-26 09:54:03,862 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0204 | Val rms_score: 0.3966
+ 2025-09-26 09:54:21,170 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0194 | Val rms_score: 0.3931
+ 2025-09-26 09:54:38,989 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0184 | Val rms_score: 0.3939
+ 2025-09-26 09:54:55,565 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0188 | Val rms_score: 0.3963
+ 2025-09-26 09:55:13,723 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0177 | Val rms_score: 0.3940
+ 2025-09-26 09:55:31,132 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0174 | Val rms_score: 0.3921
+ 2025-09-26 09:55:48,799 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0184 | Val rms_score: 0.3990
+ 2025-09-26 09:56:05,127 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0181 | Val rms_score: 0.3955
+ 2025-09-26 09:56:23,788 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0158 | Val rms_score: 0.3979
+ 2025-09-26 09:56:40,633 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0180 | Val rms_score: 0.3973
+ 2025-09-26 09:56:58,854 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0166 | Val rms_score: 0.3941
+ 2025-09-26 09:57:15,562 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0173 | Val rms_score: 0.3964
+ 2025-09-26 09:57:32,869 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0167 | Val rms_score: 0.3944
+ 2025-09-26 09:57:47,493 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0168 | Val rms_score: 0.3951
+ 2025-09-26 09:58:03,701 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0166 | Val rms_score: 0.3934
+ 2025-09-26 09:58:21,829 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0169 | Val rms_score: 0.3935
+ 2025-09-26 09:58:39,102 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0156 | Val rms_score: 0.3934
+ 2025-09-26 09:58:56,448 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0157 | Val rms_score: 0.3961
+ 2025-09-26 09:59:12,654 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0202 | Val rms_score: 0.3945
+ 2025-09-26 09:59:30,074 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0175 | Val rms_score: 0.3946
+ 2025-09-26 09:59:47,954 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0158 | Val rms_score: 0.3932
+ 2025-09-26 10:00:05,464 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0162 | Val rms_score: 0.3977
+ 2025-09-26 10:00:22,966 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0167 | Val rms_score: 0.3963
+ 2025-09-26 10:00:40,549 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0137 | Val rms_score: 0.3922
+ 2025-09-26 10:00:57,157 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0176 | Val rms_score: 0.3915
+ 2025-09-26 10:01:15,138 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0256 | Val rms_score: 0.3967
+ 2025-09-26 10:01:31,910 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0159 | Val rms_score: 0.4001
+ 2025-09-26 10:01:47,289 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0199 | Val rms_score: 0.3983
+ 2025-09-26 10:02:04,797 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0171 | Val rms_score: 0.3929
+ 2025-09-26 10:02:21,558 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0204 | Val rms_score: 0.3937
+ 2025-09-26 10:02:39,757 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0171 | Val rms_score: 0.3954
+ 2025-09-26 10:02:56,543 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0175 | Val rms_score: 0.3938
+ 2025-09-26 10:03:12,896 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0170 | Val rms_score: 0.3919
+ 2025-09-26 10:03:30,661 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0168 | Val rms_score: 0.3941
+ 2025-09-26 10:03:46,696 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0150 | Val rms_score: 0.3948
+ 2025-09-26 10:04:04,765 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0152 | Val rms_score: 0.3916
+ 2025-09-26 10:04:22,075 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0156 | Val rms_score: 0.3940
+ 2025-09-26 10:04:38,615 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0152 | Val rms_score: 0.3944
+ 2025-09-26 10:04:57,265 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0149 | Val rms_score: 0.3918
+ 2025-09-26 10:05:15,035 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0146 | Val rms_score: 0.3957
+ 2025-09-26 10:05:33,084 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0160 | Val rms_score: 0.3876
+ 2025-09-26 10:05:33,304 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 6164
+ 2025-09-26 10:05:34,031 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 92 with val rms_score: 0.3876
+ 2025-09-26 10:05:51,518 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0147 | Val rms_score: 0.3920
+ 2025-09-26 10:06:08,567 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0140 | Val rms_score: 0.3898
+ 2025-09-26 10:06:25,652 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0150 | Val rms_score: 0.3921
+ 2025-09-26 10:06:41,591 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0153 | Val rms_score: 0.3916
+ 2025-09-26 10:06:59,435 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0160 | Val rms_score: 0.3929
+ 2025-09-26 10:07:17,085 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0143 | Val rms_score: 0.3916
+ 2025-09-26 10:07:34,497 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0135 | Val rms_score: 0.3938
+ 2025-09-26 10:07:51,329 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0142 | Val rms_score: 0.3937
+ 2025-09-26 10:07:52,559 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Test rms_score: 0.4907
+ 2025-09-26 10:07:52,905 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_permeability at 2025-09-26_10-07-52
+ 2025-09-26 10:08:07,401 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6716 | Val rms_score: 0.4636
+ 2025-09-26 10:08:07,401 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 67
+ 2025-09-26 10:08:08,067 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4636
+ 2025-09-26 10:08:25,502 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4320 | Val rms_score: 0.4316
+ 2025-09-26 10:08:25,654 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 134
+ 2025-09-26 10:08:26,242 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4316
+ 2025-09-26 10:08:43,902 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.0272 | Val rms_score: 0.4405
+ 2025-09-26 10:09:00,423 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3209 | Val rms_score: 0.4462
+ 2025-09-26 10:09:17,449 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2607 | Val rms_score: 0.4079
+ 2025-09-26 10:09:17,595 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 335
+ 2025-09-26 10:09:18,140 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4079
+ 2025-09-26 10:09:33,268 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.0562 | Val rms_score: 0.4122
+ 2025-09-26 10:09:50,937 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1772 | Val rms_score: 0.4175
+ 2025-09-26 10:10:06,292 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1536 | Val rms_score: 0.4432
+ 2025-09-26 10:10:24,270 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1471 | Val rms_score: 0.4070
+ 2025-09-26 10:10:24,445 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 603
+ 2025-09-26 10:10:25,018 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.4070
+ 2025-09-26 10:10:41,560 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1409 | Val rms_score: 0.4054
+ 2025-09-26 10:10:41,707 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 670
+ 2025-09-26 10:10:42,377 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.4054
+ 2025-09-26 10:11:00,174 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0929 | Val rms_score: 0.4106
+ 2025-09-26 10:11:16,818 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1045 | Val rms_score: 0.3956
+ 2025-09-26 10:11:16,967 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 804
+ 2025-09-26 10:11:17,536 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.3956
+ 2025-09-26 10:11:35,813 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1059 | Val rms_score: 0.4007
+ 2025-09-26 10:11:51,974 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0863 | Val rms_score: 0.3908
+ 2025-09-26 10:11:52,126 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 938
+ 2025-09-26 10:11:52,675 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 14 with val rms_score: 0.3908
+ 2025-09-26 10:12:11,861 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0498 | Val rms_score: 0.3886
+ 2025-09-26 10:12:12,043 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1005
+ 2025-09-26 10:12:12,571 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.3886
+ 2025-09-26 10:12:30,132 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0590 | Val rms_score: 0.3866
+ 2025-09-26 10:12:30,591 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1072
+ 2025-09-26 10:12:31,255 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.3866
+ 2025-09-26 10:12:47,599 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0585 | Val rms_score: 0.3937
+ 2025-09-26 10:13:03,720 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0456 | Val rms_score: 0.3904
+ 2025-09-26 10:13:20,472 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0464 | Val rms_score: 0.3914
+ 2025-09-26 10:13:38,436 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0477 | Val rms_score: 0.3831
+ 2025-09-26 10:13:38,609 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1340
+ 2025-09-26 10:13:39,273 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.3831
+ 2025-09-26 10:13:56,827 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0371 | Val rms_score: 0.3899
+ 2025-09-26 10:14:13,527 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0434 | Val rms_score: 0.3925
+ 2025-09-26 10:14:30,889 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0476 | Val rms_score: 0.3940
+ 2025-09-26 10:14:48,827 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0371 | Val rms_score: 0.3893
+ 2025-09-26 10:15:06,134 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0385 | Val rms_score: 0.3829
+ 2025-09-26 10:15:06,295 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1675
+ 2025-09-26 10:15:06,862 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.3829
+ 2025-09-26 10:15:24,098 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0424 | Val rms_score: 0.3850
+ 2025-09-26 10:15:42,065 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0510 | Val rms_score: 0.3896
+ 2025-09-26 10:15:58,127 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0550 | Val rms_score: 0.3913
+ 2025-09-26 10:16:14,446 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0342 | Val rms_score: 0.3862
+ 2025-09-26 10:16:32,833 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0252 | Val rms_score: 0.3893
+ 2025-09-26 10:16:50,514 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0290 | Val rms_score: 0.3859
+ 2025-09-26 10:17:08,555 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0293 | Val rms_score: 0.3860
+ 2025-09-26 10:17:25,878 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0268 | Val rms_score: 0.3854
+ 2025-09-26 10:17:43,027 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0283 | Val rms_score: 0.3845
+ 2025-09-26 10:17:59,895 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0243 | Val rms_score: 0.3858
+ 2025-09-26 10:18:15,776 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0309 | Val rms_score: 0.3951
+ 2025-09-26 10:18:33,372 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0394 | Val rms_score: 0.3825
+ 2025-09-26 10:18:33,544 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 2479
290
+ 2025-09-26 10:18:34,163 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 37 with val rms_score: 0.3825
291
+ 2025-09-26 10:18:51,359 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0265 | Val rms_score: 0.3837
292
+ 2025-09-26 10:19:07,539 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0222 | Val rms_score: 0.3863
293
+ 2025-09-26 10:19:25,498 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0237 | Val rms_score: 0.3864
294
+ 2025-09-26 10:19:43,364 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0221 | Val rms_score: 0.3864
295
+ 2025-09-26 10:20:01,485 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0209 | Val rms_score: 0.3886
296
+ 2025-09-26 10:20:18,857 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0238 | Val rms_score: 0.3842
297
+ 2025-09-26 10:20:35,375 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0220 | Val rms_score: 0.3846
298
+ 2025-09-26 10:20:53,672 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0211 | Val rms_score: 0.3861
299
+ 2025-09-26 10:21:11,240 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0215 | Val rms_score: 0.3848
300
+ 2025-09-26 10:21:29,062 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0210 | Val rms_score: 0.3875
301
+ 2025-09-26 10:21:46,405 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0229 | Val rms_score: 0.3893
302
+ 2025-09-26 10:22:03,175 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0229 | Val rms_score: 0.3887
303
+ 2025-09-26 10:22:19,794 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0230 | Val rms_score: 0.4286
304
+ 2025-09-26 10:22:36,735 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0317 | Val rms_score: 0.3947
305
+ 2025-09-26 10:22:55,093 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0272 | Val rms_score: 0.3873
306
+ 2025-09-26 10:23:12,745 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0283 | Val rms_score: 0.3900
307
+ 2025-09-26 10:23:29,424 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0308 | Val rms_score: 0.3856
308
+ 2025-09-26 10:23:46,739 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0244 | Val rms_score: 0.3861
309
+ 2025-09-26 10:24:02,561 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0243 | Val rms_score: 0.3876
310
+ 2025-09-26 10:24:20,754 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0225 | Val rms_score: 0.3929
311
+ 2025-09-26 10:24:37,854 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0227 | Val rms_score: 0.3867
312
+ 2025-09-26 10:24:54,282 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0215 | Val rms_score: 0.3858
313
+ 2025-09-26 10:25:12,559 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0225 | Val rms_score: 0.3856
314
+ 2025-09-26 10:25:29,950 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0201 | Val rms_score: 0.3863
315
+ 2025-09-26 10:25:47,983 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0198 | Val rms_score: 0.3863
316
+ 2025-09-26 10:26:05,541 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0178 | Val rms_score: 0.3858
317
+ 2025-09-26 10:26:22,214 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0198 | Val rms_score: 0.3859
318
+ 2025-09-26 10:26:39,318 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0256 | Val rms_score: 0.3844
319
+ 2025-09-26 10:26:56,162 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0215 | Val rms_score: 0.3863
320
+ 2025-09-26 10:27:14,362 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0199 | Val rms_score: 0.3850
321
+ 2025-09-26 10:27:32,404 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0195 | Val rms_score: 0.3852
322
+ 2025-09-26 10:27:48,991 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0165 | Val rms_score: 0.3870
323
+ 2025-09-26 10:28:05,922 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0190 | Val rms_score: 0.3821
324
+ 2025-09-26 10:28:06,074 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 4690
325
+ 2025-09-26 10:28:06,669 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 70 with val rms_score: 0.3821
326
+ 2025-09-26 10:28:23,293 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0199 | Val rms_score: 0.3813
327
+ 2025-09-26 10:28:23,817 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 4757
328
+ 2025-09-26 10:28:24,507 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 71 with val rms_score: 0.3813
329
+ 2025-09-26 10:28:41,772 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0181 | Val rms_score: 0.3860
330
+ 2025-09-26 10:28:59,637 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0275 | Val rms_score: 0.3870
331
+ 2025-09-26 10:29:16,912 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0407 | Val rms_score: 0.3897
332
+ 2025-09-26 10:29:36,040 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0187 | Val rms_score: 0.3859
333
+ 2025-09-26 10:29:53,527 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0217 | Val rms_score: 0.3826
334
+ 2025-09-26 10:30:08,971 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0192 | Val rms_score: 0.3835
335
+ 2025-09-26 10:30:26,865 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0213 | Val rms_score: 0.3847
336
+ 2025-09-26 10:30:43,133 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0326 | Val rms_score: 0.3819
337
+ 2025-09-26 10:31:00,510 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0215 | Val rms_score: 0.3855
338
+ 2025-09-26 10:31:18,230 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0197 | Val rms_score: 0.3820
339
+ 2025-09-26 10:31:35,890 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0181 | Val rms_score: 0.3830
340
+ 2025-09-26 10:31:53,947 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0178 | Val rms_score: 0.3842
341
+ 2025-09-26 10:32:11,054 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0190 | Val rms_score: 0.3840
342
+ 2025-09-26 10:32:28,808 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0176 | Val rms_score: 0.3822
343
+ 2025-09-26 10:32:46,714 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0169 | Val rms_score: 0.3858
344
+ 2025-09-26 10:33:03,850 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0156 | Val rms_score: 0.3861
345
+ 2025-09-26 10:33:21,853 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0169 | Val rms_score: 0.3858
346
+ 2025-09-26 10:33:41,218 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0164 | Val rms_score: 0.3854
347
+ 2025-09-26 10:33:58,233 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0153 | Val rms_score: 0.3865
348
+ 2025-09-26 10:34:16,337 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0160 | Val rms_score: 0.3863
349
+ 2025-09-26 10:34:34,975 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0157 | Val rms_score: 0.3839
350
+ 2025-09-26 10:34:52,777 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0151 | Val rms_score: 0.3843
351
+ 2025-09-26 10:35:11,191 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0153 | Val rms_score: 0.3853
352
+ 2025-09-26 10:35:29,045 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0155 | Val rms_score: 0.3862
353
+ 2025-09-26 10:35:47,752 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0410 | Val rms_score: 0.4594
354
+ 2025-09-26 10:36:04,282 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.1101 | Val rms_score: 0.3819
355
+ 2025-09-26 10:36:22,930 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0315 | Val rms_score: 0.3787
356
+ 2025-09-26 10:36:23,084 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 6566
357
+ 2025-09-26 10:36:23,660 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 98 with val rms_score: 0.3787
358
+ 2025-09-26 10:36:40,583 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0271 | Val rms_score: 0.3783
359
+ 2025-09-26 10:36:40,771 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 6633
360
+ 2025-09-26 10:36:41,325 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 99 with val rms_score: 0.3783
361
+ 2025-09-26 10:36:59,151 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0241 | Val rms_score: 0.3773
362
+ 2025-09-26 10:36:59,337 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 6700
363
+ 2025-09-26 10:36:59,912 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 100 with val rms_score: 0.3773
364
+ 2025-09-26 10:37:00,805 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Test rms_score: 0.4993
365
+ 2025-09-26 10:37:01,180 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.5042, Std Dev: 0.0135
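The "Final Triplicate Test Results" line summarizes the three independent test runs for the dataset. A minimal sketch of how such a summary could be computed (the three scores below are hypothetical stand-ins; this excerpt of the log only shows one run's test score, 0.4993, and the benchmark may use a sample rather than population standard deviation):

```python
import statistics

# Hypothetical triplicate test rms_scores (illustrative values only,
# not recovered from this log)
scores = [0.4993, 0.4950, 0.5183]

avg = statistics.mean(scores)
# pstdev = population std dev; statistics.stdev would give the sample version
std = statistics.pstdev(scores)

print(f"Final Triplicate Test Results — Avg rms_score: {avg:.4f}, Std Dev: {std:.4f}")
```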
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_ppb_h_epochs100_batch_size32_20250926_103701.log ADDED
@@ -0,0 +1,331 @@
1
+ 2025-09-26 10:37:01,181 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_ppb_h
2
+ 2025-09-26 10:37:01,181 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - dataset: adme_ppb_h, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
3
+ 2025-09-26 10:37:01,185 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_ppb_h at 2025-09-26_10-37-01
4
+ 2025-09-26 10:37:03,724 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8875 | Val rms_score: 0.8377
5
+ 2025-09-26 10:37:03,724 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
6
+ 2025-09-26 10:37:04,515 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.8377
7
+ 2025-09-26 10:37:07,850 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4000 | Val rms_score: 0.7283
8
+ 2025-09-26 10:37:08,027 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
9
+ 2025-09-26 10:37:08,610 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.7283
10
+ 2025-09-26 10:37:11,889 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.2219 | Val rms_score: 0.6066
11
+ 2025-09-26 10:37:12,083 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 15
12
+ 2025-09-26 10:37:12,641 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.6066
13
+ 2025-09-26 10:37:15,702 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.1734 | Val rms_score: 0.6203
14
+ 2025-09-26 10:37:17,871 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1289 | Val rms_score: 0.6309
15
+ 2025-09-26 10:37:20,522 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.0957 | Val rms_score: 0.6504
16
+ 2025-09-26 10:37:23,850 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.0836 | Val rms_score: 0.7016
17
+ 2025-09-26 10:37:27,063 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0719 | Val rms_score: 0.7079
18
+ 2025-09-26 10:37:30,185 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0613 | Val rms_score: 0.6884
19
+ 2025-09-26 10:37:32,872 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0469 | Val rms_score: 0.6980
20
+ 2025-09-26 10:37:35,556 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0477 | Val rms_score: 0.6799
21
+ 2025-09-26 10:37:38,518 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0344 | Val rms_score: 0.6798
22
+ 2025-09-26 10:37:41,241 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0328 | Val rms_score: 0.6858
23
+ 2025-09-26 10:37:43,522 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0318 | Val rms_score: 0.6897
24
+ 2025-09-26 10:37:45,852 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0262 | Val rms_score: 0.6961
25
+ 2025-09-26 10:37:47,380 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0293 | Val rms_score: 0.6991
26
+ 2025-09-26 10:37:50,176 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0221 | Val rms_score: 0.7271
27
+ 2025-09-26 10:37:52,993 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0201 | Val rms_score: 0.6997
28
+ 2025-09-26 10:37:54,929 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0187 | Val rms_score: 0.7188
29
+ 2025-09-26 10:37:57,659 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0187 | Val rms_score: 0.6966
30
+ 2025-09-26 10:38:00,309 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0163 | Val rms_score: 0.6924
31
+ 2025-09-26 10:38:03,101 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0177 | Val rms_score: 0.7148
32
+ 2025-09-26 10:38:05,571 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0141 | Val rms_score: 0.6936
33
+ 2025-09-26 10:38:07,977 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0152 | Val rms_score: 0.7018
34
+ 2025-09-26 10:38:10,403 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0118 | Val rms_score: 0.7050
35
+ 2025-09-26 10:38:12,370 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0127 | Val rms_score: 0.7091
36
+ 2025-09-26 10:38:14,230 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0108 | Val rms_score: 0.7000
37
+ 2025-09-26 10:38:17,059 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0142 | Val rms_score: 0.7051
38
+ 2025-09-26 10:38:19,897 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0128 | Val rms_score: 0.6998
39
+ 2025-09-26 10:38:22,550 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0123 | Val rms_score: 0.7047
40
+ 2025-09-26 10:38:25,182 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0113 | Val rms_score: 0.7120
41
+ 2025-09-26 10:38:27,900 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0124 | Val rms_score: 0.7054
42
+ 2025-09-26 10:38:30,192 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0117 | Val rms_score: 0.7118
43
+ 2025-09-26 10:38:32,406 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0116 | Val rms_score: 0.6991
44
+ 2025-09-26 10:38:34,953 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0127 | Val rms_score: 0.7128
45
+ 2025-09-26 10:38:37,376 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0161 | Val rms_score: 0.6998
46
+ 2025-09-26 10:38:40,261 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0121 | Val rms_score: 0.7018
47
+ 2025-09-26 10:38:42,660 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0131 | Val rms_score: 0.6846
48
+ 2025-09-26 10:38:44,955 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0146 | Val rms_score: 0.6892
49
+ 2025-09-26 10:38:47,337 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0106 | Val rms_score: 0.6902
50
+ 2025-09-26 10:38:49,110 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0126 | Val rms_score: 0.7170
51
+ 2025-09-26 10:38:51,171 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0122 | Val rms_score: 0.7077
52
+ 2025-09-26 10:38:53,865 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0145 | Val rms_score: 0.7248
53
+ 2025-09-26 10:38:56,327 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0099 | Val rms_score: 0.7163
54
+ 2025-09-26 10:38:58,846 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0096 | Val rms_score: 0.7089
55
+ 2025-09-26 10:39:01,378 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0194 | Val rms_score: 0.7112
56
+ 2025-09-26 10:39:04,114 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0131 | Val rms_score: 0.6902
57
+ 2025-09-26 10:39:06,405 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0320 | Val rms_score: 0.6962
58
+ 2025-09-26 10:39:07,988 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0312 | Val rms_score: 0.6996
59
+ 2025-09-26 10:39:10,238 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0281 | Val rms_score: 0.6625
60
+ 2025-09-26 10:39:12,649 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0190 | Val rms_score: 0.7144
61
+ 2025-09-26 10:39:15,482 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0232 | Val rms_score: 0.6727
62
+ 2025-09-26 10:39:17,816 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0148 | Val rms_score: 0.7438
63
+ 2025-09-26 10:39:20,163 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0211 | Val rms_score: 0.6655
64
+ 2025-09-26 10:39:22,607 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0137 | Val rms_score: 0.6971
65
+ 2025-09-26 10:39:24,874 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0160 | Val rms_score: 0.6430
66
+ 2025-09-26 10:39:27,033 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0142 | Val rms_score: 0.6948
67
+ 2025-09-26 10:39:29,618 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0145 | Val rms_score: 0.6721
68
+ 2025-09-26 10:39:31,934 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0115 | Val rms_score: 0.7031
69
+ 2025-09-26 10:39:34,325 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0125 | Val rms_score: 0.6811
70
+ 2025-09-26 10:39:36,636 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0073 | Val rms_score: 0.7025
71
+ 2025-09-26 10:39:39,259 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0089 | Val rms_score: 0.6709
72
+ 2025-09-26 10:39:41,621 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0094 | Val rms_score: 0.6763
73
+ 2025-09-26 10:39:44,153 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0070 | Val rms_score: 0.6871
74
+ 2025-09-26 10:39:46,026 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0071 | Val rms_score: 0.6902
75
+ 2025-09-26 10:39:48,392 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0086 | Val rms_score: 0.6817
76
+ 2025-09-26 10:39:51,059 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0078 | Val rms_score: 0.6939
77
+ 2025-09-26 10:39:53,375 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0105 | Val rms_score: 0.6610
78
+ 2025-09-26 10:39:55,707 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0087 | Val rms_score: 0.6639
79
+ 2025-09-26 10:39:58,291 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0093 | Val rms_score: 0.6787
80
+ 2025-09-26 10:40:00,499 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0087 | Val rms_score: 0.6806
81
+ 2025-09-26 10:40:03,166 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0076 | Val rms_score: 0.6830
82
+ 2025-09-26 10:40:05,281 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0064 | Val rms_score: 0.6712
83
+ 2025-09-26 10:40:07,677 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0087 | Val rms_score: 0.6747
84
+ 2025-09-26 10:40:10,061 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0074 | Val rms_score: 0.6884
85
+ 2025-09-26 10:40:12,472 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0088 | Val rms_score: 0.6923
86
+ 2025-09-26 10:40:15,104 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0081 | Val rms_score: 0.6775
87
+ 2025-09-26 10:40:17,464 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0078 | Val rms_score: 0.6653
88
+ 2025-09-26 10:40:19,806 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0077 | Val rms_score: 0.6867
89
+ 2025-09-26 10:40:22,191 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0064 | Val rms_score: 0.6702
90
+ 2025-09-26 10:40:24,722 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0089 | Val rms_score: 0.6808
91
+ 2025-09-26 10:40:27,685 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0094 | Val rms_score: 0.6668
92
+ 2025-09-26 10:40:30,195 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0125 | Val rms_score: 0.6876
93
+ 2025-09-26 10:40:32,618 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0079 | Val rms_score: 0.6653
94
+ 2025-09-26 10:40:34,986 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0088 | Val rms_score: 0.6741
95
+ 2025-09-26 10:40:37,360 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0063 | Val rms_score: 0.6782
96
+ 2025-09-26 10:40:39,415 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0084 | Val rms_score: 0.6748
97
+ 2025-09-26 10:40:41,055 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0059 | Val rms_score: 0.6906
98
+ 2025-09-26 10:40:42,929 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0070 | Val rms_score: 0.6817
99
+ 2025-09-26 10:40:45,744 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0080 | Val rms_score: 0.6865
100
+ 2025-09-26 10:40:48,393 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0089 | Val rms_score: 0.6855
101
+ 2025-09-26 10:40:51,284 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0084 | Val rms_score: 0.6691
102
+ 2025-09-26 10:40:53,855 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0081 | Val rms_score: 0.6763
103
+ 2025-09-26 10:40:56,232 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0078 | Val rms_score: 0.6708
104
+ 2025-09-26 10:40:58,327 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0072 | Val rms_score: 0.6647
105
+ 2025-09-26 10:41:00,051 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0085 | Val rms_score: 0.6858
106
+ 2025-09-26 10:41:02,706 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0082 | Val rms_score: 0.6820
107
+ 2025-09-26 10:41:05,160 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0083 | Val rms_score: 0.6714
108
+ 2025-09-26 10:41:07,562 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0058 | Val rms_score: 0.6695
109
+ 2025-09-26 10:41:09,983 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0083 | Val rms_score: 0.6595
110
+ 2025-09-26 10:41:10,381 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.8203
111
+ 2025-09-26 10:41:10,743 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_ppb_h at 2025-09-26_10-41-10
112
+ 2025-09-26 10:41:12,809 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8812 | Val rms_score: 0.7292
113
+ 2025-09-26 10:41:12,809 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
114
+ 2025-09-26 10:41:13,531 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.7292
115
+ 2025-09-26 10:41:15,422 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.3703 | Val rms_score: 0.6235
116
+ 2025-09-26 10:41:15,601 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
117
+ 2025-09-26 10:41:16,165 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6235
118
+ 2025-09-26 10:41:19,107 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.2281 | Val rms_score: 0.6948
119
+ 2025-09-26 10:41:21,871 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.1969 | Val rms_score: 0.6098
120
+ 2025-09-26 10:41:22,056 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 20
121
+ 2025-09-26 10:41:22,613 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.6098
122
+ 2025-09-26 10:41:24,946 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1508 | Val rms_score: 0.5953
123
+ 2025-09-26 10:41:25,129 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 25
124
+ 2025-09-26 10:41:25,851 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.5953
125
+ 2025-09-26 10:41:28,171 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1102 | Val rms_score: 0.6402
126
+ 2025-09-26 10:41:30,876 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.0992 | Val rms_score: 0.5954
127
+ 2025-09-26 10:41:32,640 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0785 | Val rms_score: 0.6263
128
+ 2025-09-26 10:41:34,999 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0672 | Val rms_score: 0.6442
129
+ 2025-09-26 10:41:37,401 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0547 | Val rms_score: 0.6258
130
+ 2025-09-26 10:41:40,098 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0465 | Val rms_score: 0.6484
131
+ 2025-09-26 10:41:43,377 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0465 | Val rms_score: 0.6308
132
+ 2025-09-26 10:41:46,039 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0330 | Val rms_score: 0.6453
+ 2025-09-26 10:41:47,759 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0291 | Val rms_score: 0.6580
+ 2025-09-26 10:41:50,001 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0281 | Val rms_score: 0.6556
+ 2025-09-26 10:41:52,413 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0242 | Val rms_score: 0.6597
+ 2025-09-26 10:41:55,289 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0211 | Val rms_score: 0.6596
+ 2025-09-26 10:41:58,023 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0224 | Val rms_score: 0.6447
+ 2025-09-26 10:42:00,580 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0184 | Val rms_score: 0.6506
+ 2025-09-26 10:42:03,633 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0182 | Val rms_score: 0.6499
+ 2025-09-26 10:42:06,193 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0186 | Val rms_score: 0.6521
+ 2025-09-26 10:42:09,142 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0170 | Val rms_score: 0.6476
+ 2025-09-26 10:42:12,037 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0155 | Val rms_score: 0.6470
+ 2025-09-26 10:42:14,817 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0146 | Val rms_score: 0.6525
+ 2025-09-26 10:42:17,563 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0139 | Val rms_score: 0.6516
+ 2025-09-26 10:42:20,407 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0157 | Val rms_score: 0.6430
+ 2025-09-26 10:42:22,506 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0122 | Val rms_score: 0.6537
+ 2025-09-26 10:42:25,053 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0123 | Val rms_score: 0.6513
+ 2025-09-26 10:42:27,550 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0110 | Val rms_score: 0.6471
+ 2025-09-26 10:42:29,956 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0111 | Val rms_score: 0.6578
+ 2025-09-26 10:42:32,495 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0104 | Val rms_score: 0.6488
+ 2025-09-26 10:42:35,131 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0092 | Val rms_score: 0.6466
+ 2025-09-26 10:42:37,502 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0085 | Val rms_score: 0.6548
+ 2025-09-26 10:42:39,743 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0106 | Val rms_score: 0.6488
+ 2025-09-26 10:42:41,469 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0099 | Val rms_score: 0.6543
+ 2025-09-26 10:42:43,712 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0101 | Val rms_score: 0.6679
+ 2025-09-26 10:42:46,537 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0107 | Val rms_score: 0.6533
+ 2025-09-26 10:42:48,836 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0085 | Val rms_score: 0.6561
+ 2025-09-26 10:42:51,227 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0108 | Val rms_score: 0.6456
+ 2025-09-26 10:42:53,544 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0095 | Val rms_score: 0.6706
+ 2025-09-26 10:42:55,881 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0112 | Val rms_score: 0.6462
+ 2025-09-26 10:42:58,561 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0099 | Val rms_score: 0.6560
+ 2025-09-26 10:43:00,223 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0090 | Val rms_score: 0.6587
+ 2025-09-26 10:43:02,592 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0116 | Val rms_score: 0.6463
+ 2025-09-26 10:43:04,990 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0096 | Val rms_score: 0.6670
+ 2025-09-26 10:43:07,534 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0078 | Val rms_score: 0.6545
+ 2025-09-26 10:43:10,496 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0094 | Val rms_score: 0.6647
+ 2025-09-26 10:43:12,896 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0097 | Val rms_score: 0.6500
+ 2025-09-26 10:43:15,294 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0083 | Val rms_score: 0.6551
+ 2025-09-26 10:43:17,518 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0094 | Val rms_score: 0.6538
+ 2025-09-26 10:43:19,480 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0086 | Val rms_score: 0.6566
+ 2025-09-26 10:43:22,155 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0088 | Val rms_score: 0.6554
+ 2025-09-26 10:43:24,866 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0083 | Val rms_score: 0.6664
+ 2025-09-26 10:43:27,264 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0115 | Val rms_score: 0.6316
+ 2025-09-26 10:43:28,971 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0107 | Val rms_score: 0.6468
+ 2025-09-26 10:43:31,516 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0094 | Val rms_score: 0.6426
+ 2025-09-26 10:43:34,180 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0097 | Val rms_score: 0.6610
+ 2025-09-26 10:43:36,475 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0088 | Val rms_score: 0.6576
+ 2025-09-26 10:43:38,303 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0089 | Val rms_score: 0.6565
+ 2025-09-26 10:43:40,767 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0083 | Val rms_score: 0.6381
+ 2025-09-26 10:43:43,161 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0090 | Val rms_score: 0.6548
+ 2025-09-26 10:43:45,837 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0077 | Val rms_score: 0.6530
+ 2025-09-26 10:43:48,137 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0098 | Val rms_score: 0.6461
+ 2025-09-26 10:43:50,506 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0099 | Val rms_score: 0.6574
+ 2025-09-26 10:43:52,919 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0089 | Val rms_score: 0.6581
+ 2025-09-26 10:43:55,202 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0103 | Val rms_score: 0.6533
+ 2025-09-26 10:43:57,852 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0083 | Val rms_score: 0.6453
+ 2025-09-26 10:44:00,508 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0093 | Val rms_score: 0.6326
+ 2025-09-26 10:44:02,800 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0083 | Val rms_score: 0.6570
+ 2025-09-26 10:44:05,112 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0086 | Val rms_score: 0.6476
+ 2025-09-26 10:44:07,548 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0108 | Val rms_score: 0.6686
+ 2025-09-26 10:44:10,238 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0095 | Val rms_score: 0.6465
+ 2025-09-26 10:44:12,605 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0075 | Val rms_score: 0.6491
+ 2025-09-26 10:44:14,425 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0079 | Val rms_score: 0.6460
+ 2025-09-26 10:44:16,781 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0096 | Val rms_score: 0.6701
+ 2025-09-26 10:44:19,156 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0117 | Val rms_score: 0.6462
+ 2025-09-26 10:44:21,794 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0090 | Val rms_score: 0.6569
+ 2025-09-26 10:44:24,169 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0094 | Val rms_score: 0.6422
+ 2025-09-26 10:44:26,549 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0070 | Val rms_score: 0.6531
+ 2025-09-26 10:44:28,968 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0083 | Val rms_score: 0.6416
+ 2025-09-26 10:44:31,304 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0096 | Val rms_score: 0.6449
+ 2025-09-26 10:44:33,668 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0086 | Val rms_score: 0.6450
+ 2025-09-26 10:44:36,113 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0071 | Val rms_score: 0.6521
+ 2025-09-26 10:44:38,533 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0066 | Val rms_score: 0.6573
+ 2025-09-26 10:44:40,912 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0094 | Val rms_score: 0.6427
+ 2025-09-26 10:44:43,473 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0070 | Val rms_score: 0.6668
+ 2025-09-26 10:44:46,139 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0068 | Val rms_score: 0.6407
+ 2025-09-26 10:44:48,604 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0073 | Val rms_score: 0.6647
+ 2025-09-26 10:44:50,398 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0087 | Val rms_score: 0.6369
+ 2025-09-26 10:44:52,813 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0079 | Val rms_score: 0.6509
+ 2025-09-26 10:44:55,212 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0075 | Val rms_score: 0.6320
+ 2025-09-26 10:44:57,984 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0069 | Val rms_score: 0.6641
+ 2025-09-26 10:45:00,381 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0074 | Val rms_score: 0.6525
+ 2025-09-26 10:45:02,736 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0067 | Val rms_score: 0.6498
+ 2025-09-26 10:45:05,293 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0072 | Val rms_score: 0.6548
+ 2025-09-26 10:45:07,584 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0071 | Val rms_score: 0.6316
+ 2025-09-26 10:45:09,875 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0079 | Val rms_score: 0.6602
+ 2025-09-26 10:45:12,682 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0090 | Val rms_score: 0.6401
+ 2025-09-26 10:45:14,944 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0090 | Val rms_score: 0.6654
+ 2025-09-26 10:45:17,328 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0089 | Val rms_score: 0.6290
+ 2025-09-26 10:45:17,639 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.7707
+ 2025-09-26 10:45:17,991 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_ppb_h at 2025-09-26_10-45-17
+ 2025-09-26 10:45:20,212 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.4000 | Val rms_score: 0.8126
+ 2025-09-26 10:45:20,212 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
+ 2025-09-26 10:45:20,811 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.8126
+ 2025-09-26 10:45:23,199 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4531 | Val rms_score: 0.8507
+ 2025-09-26 10:45:25,619 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.2906 | Val rms_score: 0.7092
+ 2025-09-26 10:45:25,801 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 15
+ 2025-09-26 10:45:26,550 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.7092
+ 2025-09-26 10:45:29,251 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2016 | Val rms_score: 0.7244
+ 2025-09-26 10:45:31,770 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1609 | Val rms_score: 0.6539
+ 2025-09-26 10:45:31,991 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 25
+ 2025-09-26 10:45:32,586 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.6539
+ 2025-09-26 10:45:34,970 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1242 | Val rms_score: 0.6496
+ 2025-09-26 10:45:35,613 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 30
+ 2025-09-26 10:45:36,266 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.6496
+ 2025-09-26 10:45:39,182 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1000 | Val rms_score: 0.6657
+ 2025-09-26 10:45:41,720 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1000 | Val rms_score: 0.6775
+ 2025-09-26 10:45:43,351 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0805 | Val rms_score: 0.7027
+ 2025-09-26 10:45:45,114 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0668 | Val rms_score: 0.6869
+ 2025-09-26 10:45:47,623 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0609 | Val rms_score: 0.6743
+ 2025-09-26 10:45:50,345 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0535 | Val rms_score: 0.6987
+ 2025-09-26 10:45:52,968 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0516 | Val rms_score: 0.7099
+ 2025-09-26 10:45:55,516 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0492 | Val rms_score: 0.6879
+ 2025-09-26 10:45:58,355 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0471 | Val rms_score: 0.6871
+ 2025-09-26 10:45:59,935 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0391 | Val rms_score: 0.7090
+ 2025-09-26 10:46:02,627 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0367 | Val rms_score: 0.6916
+ 2025-09-26 10:46:05,011 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0336 | Val rms_score: 0.6859
+ 2025-09-26 10:46:07,468 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0287 | Val rms_score: 0.7013
+ 2025-09-26 10:46:09,856 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0258 | Val rms_score: 0.7021
+ 2025-09-26 10:46:12,148 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0254 | Val rms_score: 0.6969
+ 2025-09-26 10:46:14,784 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0247 | Val rms_score: 0.6947
+ 2025-09-26 10:46:17,127 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0211 | Val rms_score: 0.7098
+ 2025-09-26 10:46:19,319 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0270 | Val rms_score: 0.7063
+ 2025-09-26 10:46:21,052 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0244 | Val rms_score: 0.6864
+ 2025-09-26 10:46:23,430 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0236 | Val rms_score: 0.7063
+ 2025-09-26 10:46:26,084 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0213 | Val rms_score: 0.6991
+ 2025-09-26 10:46:28,528 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0180 | Val rms_score: 0.7035
+ 2025-09-26 10:46:30,834 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0178 | Val rms_score: 0.7139
+ 2025-09-26 10:46:33,293 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0156 | Val rms_score: 0.7090
+ 2025-09-26 10:46:35,657 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0147 | Val rms_score: 0.7029
+ 2025-09-26 10:46:37,713 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0140 | Val rms_score: 0.7013
+ 2025-09-26 10:46:39,850 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0127 | Val rms_score: 0.6948
+ 2025-09-26 10:46:42,191 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0123 | Val rms_score: 0.6965
+ 2025-09-26 10:46:44,754 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0146 | Val rms_score: 0.7184
+ 2025-09-26 10:46:46,996 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0150 | Val rms_score: 0.7040
+ 2025-09-26 10:46:49,712 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0116 | Val rms_score: 0.7068
+ 2025-09-26 10:46:52,139 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0123 | Val rms_score: 0.7132
+ 2025-09-26 10:46:54,487 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0138 | Val rms_score: 0.7135
+ 2025-09-26 10:46:56,326 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0109 | Val rms_score: 0.7049
+ 2025-09-26 10:46:58,384 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0117 | Val rms_score: 0.7027
+ 2025-09-26 10:47:01,102 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0128 | Val rms_score: 0.6994
+ 2025-09-26 10:47:03,842 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0119 | Val rms_score: 0.7007
+ 2025-09-26 10:47:06,186 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0130 | Val rms_score: 0.7024
+ 2025-09-26 10:47:08,646 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0134 | Val rms_score: 0.7158
+ 2025-09-26 10:47:11,038 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0139 | Val rms_score: 0.6944
+ 2025-09-26 10:47:13,818 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0090 | Val rms_score: 0.6945
+ 2025-09-26 10:47:15,912 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0128 | Val rms_score: 0.6883
+ 2025-09-26 10:47:18,513 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0112 | Val rms_score: 0.7013
+ 2025-09-26 10:47:20,915 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0122 | Val rms_score: 0.7064
+ 2025-09-26 10:47:23,377 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0119 | Val rms_score: 0.6990
+ 2025-09-26 10:47:26,118 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0112 | Val rms_score: 0.7007
+ 2025-09-26 10:47:28,493 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0128 | Val rms_score: 0.7019
+ 2025-09-26 10:47:30,774 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0089 | Val rms_score: 0.6955
+ 2025-09-26 10:47:32,550 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0108 | Val rms_score: 0.7113
+ 2025-09-26 10:47:34,823 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0103 | Val rms_score: 0.7037
+ 2025-09-26 10:47:37,545 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0127 | Val rms_score: 0.7098
+ 2025-09-26 10:47:40,009 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0083 | Val rms_score: 0.6991
+ 2025-09-26 10:47:42,492 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0116 | Val rms_score: 0.6897
+ 2025-09-26 10:47:44,854 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0126 | Val rms_score: 0.6917
+ 2025-09-26 10:47:47,222 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0106 | Val rms_score: 0.6974
+ 2025-09-26 10:47:49,872 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0093 | Val rms_score: 0.6938
+ 2025-09-26 10:47:51,926 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0086 | Val rms_score: 0.6957
+ 2025-09-26 10:47:54,327 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0091 | Val rms_score: 0.6925
+ 2025-09-26 10:47:56,855 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0091 | Val rms_score: 0.6971
+ 2025-09-26 10:47:59,264 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0094 | Val rms_score: 0.7043
+ 2025-09-26 10:48:02,000 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0097 | Val rms_score: 0.6938
+ 2025-09-26 10:48:04,301 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0093 | Val rms_score: 0.7016
+ 2025-09-26 10:48:06,698 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0099 | Val rms_score: 0.6996
+ 2025-09-26 10:48:09,011 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0100 | Val rms_score: 0.6981
+ 2025-09-26 10:48:11,038 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0106 | Val rms_score: 0.6984
+ 2025-09-26 10:48:13,689 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0080 | Val rms_score: 0.6910
+ 2025-09-26 10:48:15,983 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0085 | Val rms_score: 0.6922
+ 2025-09-26 10:48:18,289 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0079 | Val rms_score: 0.6968
+ 2025-09-26 10:48:20,719 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0088 | Val rms_score: 0.6905
+ 2025-09-26 10:48:23,088 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0133 | Val rms_score: 0.6949
+ 2025-09-26 10:48:25,726 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0107 | Val rms_score: 0.6937
+ 2025-09-26 10:48:28,030 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0146 | Val rms_score: 0.6869
+ 2025-09-26 10:48:29,744 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0114 | Val rms_score: 0.6994
+ 2025-09-26 10:48:32,114 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0106 | Val rms_score: 0.6877
+ 2025-09-26 10:48:34,483 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0116 | Val rms_score: 0.6825
+ 2025-09-26 10:48:37,093 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0122 | Val rms_score: 0.6804
+ 2025-09-26 10:48:39,662 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0118 | Val rms_score: 0.6977
+ 2025-09-26 10:48:42,227 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0107 | Val rms_score: 0.6844
+ 2025-09-26 10:48:44,617 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0089 | Val rms_score: 0.6771
+ 2025-09-26 10:48:46,550 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0086 | Val rms_score: 0.6793
+ 2025-09-26 10:48:48,589 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0080 | Val rms_score: 0.6749
+ 2025-09-26 10:48:50,987 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0083 | Val rms_score: 0.6743
+ 2025-09-26 10:48:53,488 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0123 | Val rms_score: 0.6813
+ 2025-09-26 10:48:55,862 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0095 | Val rms_score: 0.6718
+ 2025-09-26 10:48:58,203 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0112 | Val rms_score: 0.6879
+ 2025-09-26 10:49:00,911 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0091 | Val rms_score: 0.6876
+ 2025-09-26 10:49:03,217 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0111 | Val rms_score: 0.6775
+ 2025-09-26 10:49:05,548 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0100 | Val rms_score: 0.6717
+ 2025-09-26 10:49:07,368 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0105 | Val rms_score: 0.6706
+ 2025-09-26 10:49:09,875 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0088 | Val rms_score: 0.6844
+ 2025-09-26 10:49:12,671 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0094 | Val rms_score: 0.6751
+ 2025-09-26 10:49:15,028 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0097 | Val rms_score: 0.6791
+ 2025-09-26 10:49:17,512 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0083 | Val rms_score: 0.6812
+ 2025-09-26 10:49:20,106 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0072 | Val rms_score: 0.6845
+ 2025-09-26 10:49:20,425 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.9219
+ 2025-09-26 10:49:20,808 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.8376, Std Dev: 0.0629
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_ppb_r_epochs100_batch_size32_20250926_104920.log ADDED
@@ -0,0 +1,333 @@
+ 2025-09-26 10:49:20,810 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_ppb_r
+ 2025-09-26 10:49:20,810 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - dataset: adme_ppb_r, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
+ 2025-09-26 10:49:20,813 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_ppb_r at 2025-09-26_10-49-20
+ 2025-09-26 10:49:23,249 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.1938 | Val rms_score: 0.9679
+ 2025-09-26 10:49:23,249 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
+ 2025-09-26 10:49:23,914 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9679
+ 2025-09-26 10:49:26,423 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4688 | Val rms_score: 0.4769
+ 2025-09-26 10:49:26,609 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
+ 2025-09-26 10:49:27,187 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4769
+ 2025-09-26 10:49:29,603 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3609 | Val rms_score: 0.5144
+ 2025-09-26 10:49:31,946 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2844 | Val rms_score: 0.5117
12
+ 2025-09-26 10:49:34,283 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2375 | Val rms_score: 0.4999
13
+ 2025-09-26 10:49:36,716 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1734 | Val rms_score: 0.5364
14
+ 2025-09-26 10:49:39,297 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1266 | Val rms_score: 0.5379
15
+ 2025-09-26 10:49:41,646 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1117 | Val rms_score: 0.5189
16
+ 2025-09-26 10:49:43,343 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0969 | Val rms_score: 0.5272
17
+ 2025-09-26 10:49:45,646 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0816 | Val rms_score: 0.5232
18
+ 2025-09-26 10:49:48,305 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1000 | Val rms_score: 0.5404
19
+ 2025-09-26 10:49:51,367 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0781 | Val rms_score: 0.5538
20
+ 2025-09-26 10:49:54,058 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0539 | Val rms_score: 0.5994
21
+ 2025-09-26 10:49:56,270 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0500 | Val rms_score: 0.5855
22
+ 2025-09-26 10:49:58,704 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0381 | Val rms_score: 0.5776
23
+ 2025-09-26 10:50:01,007 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0437 | Val rms_score: 0.5854
24
+ 2025-09-26 10:50:03,953 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0344 | Val rms_score: 0.6168
25
+ 2025-09-26 10:50:06,122 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0330 | Val rms_score: 0.6278
26
+ 2025-09-26 10:50:08,632 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0303 | Val rms_score: 0.6164
27
+ 2025-09-26 10:50:11,378 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0287 | Val rms_score: 0.6225
28
+ 2025-09-26 10:50:13,672 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0367 | Val rms_score: 0.6047
29
+ 2025-09-26 10:50:15,810 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0219 | Val rms_score: 0.5995
30
+ 2025-09-26 10:50:18,404 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0230 | Val rms_score: 0.6497
31
+ 2025-09-26 10:50:21,141 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0230 | Val rms_score: 0.6712
32
+ 2025-09-26 10:50:23,505 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0195 | Val rms_score: 0.6500
33
+ 2025-09-26 10:50:25,797 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0181 | Val rms_score: 0.6289
34
+ 2025-09-26 10:50:28,420 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0168 | Val rms_score: 0.6256
35
+ 2025-09-26 10:50:30,637 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0143 | Val rms_score: 0.6392
36
+ 2025-09-26 10:50:32,467 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0136 | Val rms_score: 0.6380
37
+ 2025-09-26 10:50:34,960 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0112 | Val rms_score: 0.6350
38
+ 2025-09-26 10:50:37,195 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0152 | Val rms_score: 0.6284
39
+ 2025-09-26 10:50:40,027 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0114 | Val rms_score: 0.6385
40
+ 2025-09-26 10:50:42,392 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0135 | Val rms_score: 0.6347
41
+ 2025-09-26 10:50:44,742 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0137 | Val rms_score: 0.6267
42
+ 2025-09-26 10:50:47,013 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0138 | Val rms_score: 0.6310
43
+ 2025-09-26 10:50:49,352 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0135 | Val rms_score: 0.6354
44
+ 2025-09-26 10:50:51,662 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0131 | Val rms_score: 0.6277
45
+ 2025-09-26 10:50:54,001 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0110 | Val rms_score: 0.6245
46
+ 2025-09-26 10:50:56,230 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0131 | Val rms_score: 0.6200
47
+ 2025-09-26 10:50:58,669 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0105 | Val rms_score: 0.6264
48
+ 2025-09-26 10:51:00,981 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0120 | Val rms_score: 0.6285
49
+ 2025-09-26 10:51:03,678 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0129 | Val rms_score: 0.5958
50
+ 2025-09-26 10:51:05,996 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0100 | Val rms_score: 0.6052
51
+ 2025-09-26 10:51:08,357 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0141 | Val rms_score: 0.5977
52
+ 2025-09-26 10:51:10,218 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0118 | Val rms_score: 0.5792
53
+ 2025-09-26 10:51:12,505 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0104 | Val rms_score: 0.5981
54
+ 2025-09-26 10:51:15,124 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0126 | Val rms_score: 0.6019
55
+ 2025-09-26 10:51:17,428 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0110 | Val rms_score: 0.5944
56
+ 2025-09-26 10:51:19,792 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0091 | Val rms_score: 0.5920
57
+ 2025-09-26 10:51:22,109 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0107 | Val rms_score: 0.5908
58
+ 2025-09-26 10:51:24,398 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0115 | Val rms_score: 0.5839
59
+ 2025-09-26 10:51:27,035 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0100 | Val rms_score: 0.5797
60
+ 2025-09-26 10:51:28,762 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0122 | Val rms_score: 0.5896
61
+ 2025-09-26 10:51:30,689 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0124 | Val rms_score: 0.5706
62
+ 2025-09-26 10:51:33,218 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0125 | Val rms_score: 0.5868
63
+ 2025-09-26 10:51:35,544 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0143 | Val rms_score: 0.6014
64
+ 2025-09-26 10:51:38,262 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0096 | Val rms_score: 0.5865
65
+ 2025-09-26 10:51:40,578 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0117 | Val rms_score: 0.6045
66
+ 2025-09-26 10:51:42,964 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0134 | Val rms_score: 0.6070
67
+ 2025-09-26 10:51:45,259 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0121 | Val rms_score: 0.5911
68
+ 2025-09-26 10:51:47,327 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0131 | Val rms_score: 0.6032
69
+ 2025-09-26 10:51:49,450 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0097 | Val rms_score: 0.6159
70
+ 2025-09-26 10:51:52,044 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0097 | Val rms_score: 0.6235
71
+ 2025-09-26 10:51:54,407 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0103 | Val rms_score: 0.6078
72
+ 2025-09-26 10:51:56,724 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0094 | Val rms_score: 0.5881
73
+ 2025-09-26 10:51:59,056 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0120 | Val rms_score: 0.6153
74
+ 2025-09-26 10:52:01,808 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0119 | Val rms_score: 0.6019
75
+ 2025-09-26 10:52:04,195 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0128 | Val rms_score: 0.6002
76
+ 2025-09-26 10:52:06,248 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0094 | Val rms_score: 0.5864
77
+ 2025-09-26 10:52:08,136 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0117 | Val rms_score: 0.5847
78
+ 2025-09-26 10:52:10,312 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0100 | Val rms_score: 0.5997
79
+ 2025-09-26 10:52:12,971 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0095 | Val rms_score: 0.6139
80
+ 2025-09-26 10:52:15,188 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0104 | Val rms_score: 0.6370
81
+ 2025-09-26 10:52:17,552 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0101 | Val rms_score: 0.5935
82
+ 2025-09-26 10:52:19,834 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0142 | Val rms_score: 0.6032
83
+ 2025-09-26 10:52:22,263 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0112 | Val rms_score: 0.6029
84
+ 2025-09-26 10:52:24,952 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0106 | Val rms_score: 0.5884
85
+ 2025-09-26 10:52:26,960 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0096 | Val rms_score: 0.5735
86
+ 2025-09-26 10:52:29,318 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0130 | Val rms_score: 0.5800
87
+ 2025-09-26 10:52:31,651 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0083 | Val rms_score: 0.5832
88
+ 2025-09-26 10:52:33,828 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0091 | Val rms_score: 0.5922
89
+ 2025-09-26 10:52:36,413 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0080 | Val rms_score: 0.5832
90
+ 2025-09-26 10:52:38,820 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0094 | Val rms_score: 0.5950
91
+ 2025-09-26 10:52:41,273 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0088 | Val rms_score: 0.6108
92
+ 2025-09-26 10:52:43,546 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0099 | Val rms_score: 0.6027
93
+ 2025-09-26 10:52:45,249 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0080 | Val rms_score: 0.6082
94
+ 2025-09-26 10:52:48,097 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0091 | Val rms_score: 0.6214
95
+ 2025-09-26 10:52:50,572 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0092 | Val rms_score: 0.5860
96
+ 2025-09-26 10:52:53,331 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0146 | Val rms_score: 0.5905
97
+ 2025-09-26 10:52:55,659 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0105 | Val rms_score: 0.5843
98
+ 2025-09-26 10:52:57,962 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0119 | Val rms_score: 0.6029
99
+ 2025-09-26 10:53:00,563 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0133 | Val rms_score: 0.5992
100
+ 2025-09-26 10:53:02,410 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0122 | Val rms_score: 0.5859
101
+ 2025-09-26 10:53:04,350 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0099 | Val rms_score: 0.5880
102
+ 2025-09-26 10:53:06,682 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0110 | Val rms_score: 0.5747
103
+ 2025-09-26 10:53:09,018 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0134 | Val rms_score: 0.5960
104
+ 2025-09-26 10:53:11,708 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0096 | Val rms_score: 0.5913
105
+ 2025-09-26 10:53:14,073 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0104 | Val rms_score: 0.5974
106
+ 2025-09-26 10:53:16,374 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0106 | Val rms_score: 0.5703
107
+ 2025-09-26 10:53:18,718 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0079 | Val rms_score: 0.5642
108
+ 2025-09-26 10:53:19,088 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.9324
109
+ 2025-09-26 10:53:19,456 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_ppb_r at 2025-09-26_10-53-19
110
+ 2025-09-26 10:53:20,898 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0500 | Val rms_score: 0.5504
111
+ 2025-09-26 10:53:20,898 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
112
+ 2025-09-26 10:53:21,593 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5504
113
+ 2025-09-26 10:53:23,911 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4250 | Val rms_score: 0.4592
114
+ 2025-09-26 10:53:24,111 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
115
+ 2025-09-26 10:53:24,678 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4592
116
+ 2025-09-26 10:53:27,813 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3328 | Val rms_score: 0.3644
117
+ 2025-09-26 10:53:27,996 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 15
118
+ 2025-09-26 10:53:28,809 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.3644
119
+ 2025-09-26 10:53:31,231 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2687 | Val rms_score: 0.3507
120
+ 2025-09-26 10:53:31,422 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
121
+ 2025-09-26 10:53:32,007 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.3507
122
+ 2025-09-26 10:53:34,384 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2141 | Val rms_score: 0.3443
123
+ 2025-09-26 10:53:34,577 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
124
+ 2025-09-26 10:53:35,152 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.3443
125
+ 2025-09-26 10:53:37,172 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1719 | Val rms_score: 0.3311
126
+ 2025-09-26 10:53:37,701 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
127
+ 2025-09-26 10:53:38,267 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.3311
128
+ 2025-09-26 10:53:40,834 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1531 | Val rms_score: 0.3730
129
+ 2025-09-26 10:53:43,185 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1461 | Val rms_score: 0.3553
130
+ 2025-09-26 10:53:45,560 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1023 | Val rms_score: 0.3806
131
+ 2025-09-26 10:53:48,178 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1117 | Val rms_score: 0.4079
132
+ 2025-09-26 10:53:50,564 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0863 | Val rms_score: 0.4464
133
+ 2025-09-26 10:53:53,441 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0863 | Val rms_score: 0.4690
134
+ 2025-09-26 10:53:55,019 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0680 | Val rms_score: 0.4993
135
+ 2025-09-26 10:53:57,828 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0490 | Val rms_score: 0.5096
136
+ 2025-09-26 10:54:00,661 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0416 | Val rms_score: 0.4755
137
+ 2025-09-26 10:54:02,946 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0402 | Val rms_score: 0.4674
138
+ 2025-09-26 10:54:05,532 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0332 | Val rms_score: 0.4821
139
+ 2025-09-26 10:54:07,814 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0303 | Val rms_score: 0.4810
140
+ 2025-09-26 10:54:10,029 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0199 | Val rms_score: 0.4851
141
+ 2025-09-26 10:54:12,056 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0228 | Val rms_score: 0.4951
142
+ 2025-09-26 10:54:14,418 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0205 | Val rms_score: 0.5011
143
+ 2025-09-26 10:54:17,482 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0225 | Val rms_score: 0.5061
144
+ 2025-09-26 10:54:19,958 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0190 | Val rms_score: 0.5038
145
+ 2025-09-26 10:54:22,450 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0182 | Val rms_score: 0.5009
146
+ 2025-09-26 10:54:24,827 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0184 | Val rms_score: 0.5059
147
+ 2025-09-26 10:54:26,930 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0152 | Val rms_score: 0.5089
148
+ 2025-09-26 10:54:29,580 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0168 | Val rms_score: 0.5170
149
+ 2025-09-26 10:54:31,826 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0148 | Val rms_score: 0.5205
150
+ 2025-09-26 10:54:34,511 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0142 | Val rms_score: 0.5046
151
+ 2025-09-26 10:54:36,904 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0137 | Val rms_score: 0.4874
152
+ 2025-09-26 10:54:39,166 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0107 | Val rms_score: 0.4837
153
+ 2025-09-26 10:54:41,751 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0145 | Val rms_score: 0.4889
154
+ 2025-09-26 10:54:44,086 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0118 | Val rms_score: 0.5012
155
+ 2025-09-26 10:54:46,355 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0094 | Val rms_score: 0.4981
156
+ 2025-09-26 10:54:48,728 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0108 | Val rms_score: 0.4952
157
+ 2025-09-26 10:54:51,161 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0117 | Val rms_score: 0.5008
158
+ 2025-09-26 10:54:53,413 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0157 | Val rms_score: 0.5053
159
+ 2025-09-26 10:54:56,177 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0109 | Val rms_score: 0.4845
160
+ 2025-09-26 10:54:58,734 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0099 | Val rms_score: 0.5140
161
+ 2025-09-26 10:55:00,415 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0110 | Val rms_score: 0.5157
162
+ 2025-09-26 10:55:02,803 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0093 | Val rms_score: 0.5029
163
+ 2025-09-26 10:55:05,437 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0116 | Val rms_score: 0.5027
164
+ 2025-09-26 10:55:07,701 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0111 | Val rms_score: 0.5188
165
+ 2025-09-26 10:55:10,033 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0105 | Val rms_score: 0.4872
166
+ 2025-09-26 10:55:12,597 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0114 | Val rms_score: 0.4949
167
+ 2025-09-26 10:55:15,141 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0125 | Val rms_score: 0.4975
168
+ 2025-09-26 10:55:17,182 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0098 | Val rms_score: 0.4734
169
+ 2025-09-26 10:55:19,742 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0114 | Val rms_score: 0.4860
170
+ 2025-09-26 10:55:22,053 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0097 | Val rms_score: 0.4857
171
+ 2025-09-26 10:55:24,514 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0117 | Val rms_score: 0.4679
172
+ 2025-09-26 10:55:26,824 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0075 | Val rms_score: 0.4732
173
+ 2025-09-26 10:55:29,655 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0116 | Val rms_score: 0.4652
174
+ 2025-09-26 10:55:31,918 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0133 | Val rms_score: 0.4738
175
+ 2025-09-26 10:55:34,225 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0104 | Val rms_score: 0.4929
176
+ 2025-09-26 10:55:36,182 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0135 | Val rms_score: 0.4632
177
+ 2025-09-26 10:55:38,466 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0097 | Val rms_score: 0.4750
178
+ 2025-09-26 10:55:41,564 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0139 | Val rms_score: 0.4664
179
+ 2025-09-26 10:55:43,917 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0110 | Val rms_score: 0.4731
180
+ 2025-09-26 10:55:46,261 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0089 | Val rms_score: 0.4869
181
+ 2025-09-26 10:55:48,646 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0103 | Val rms_score: 0.4758
182
+ 2025-09-26 10:55:51,050 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0094 | Val rms_score: 0.4691
183
+ 2025-09-26 10:55:53,035 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0072 | Val rms_score: 0.4729
184
+ 2025-09-26 10:55:55,362 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0096 | Val rms_score: 0.4818
185
+ 2025-09-26 10:55:57,675 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0081 | Val rms_score: 0.4960
186
+ 2025-09-26 10:56:00,242 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0086 | Val rms_score: 0.4807
187
+ 2025-09-26 10:56:02,929 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0099 | Val rms_score: 0.4926
188
+ 2025-09-26 10:56:05,489 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0104 | Val rms_score: 0.4982
189
+ 2025-09-26 10:56:07,842 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0108 | Val rms_score: 0.4754
190
+ 2025-09-26 10:56:10,245 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0088 | Val rms_score: 0.4964
191
+ 2025-09-26 10:56:12,068 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0090 | Val rms_score: 0.4755
192
+ 2025-09-26 10:56:14,420 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0088 | Val rms_score: 0.4755
193
+ 2025-09-26 10:56:17,062 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0073 | Val rms_score: 0.5019
194
+ 2025-09-26 10:56:19,331 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0086 | Val rms_score: 0.4753
195
+ 2025-09-26 10:56:21,789 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0104 | Val rms_score: 0.4815
196
+ 2025-09-26 10:56:24,112 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0094 | Val rms_score: 0.4899
197
+ 2025-09-26 10:56:26,471 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0141 | Val rms_score: 0.4633
198
+ 2025-09-26 10:56:29,292 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0108 | Val rms_score: 0.4896
199
+ 2025-09-26 10:56:31,299 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0180 | Val rms_score: 0.4424
200
+ 2025-09-26 10:56:33,618 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0148 | Val rms_score: 0.4576
201
+ 2025-09-26 10:56:36,061 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0119 | Val rms_score: 0.4700
202
+ 2025-09-26 10:56:38,384 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0120 | Val rms_score: 0.4574
203
+ 2025-09-26 10:56:41,025 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0119 | Val rms_score: 0.4525
204
+ 2025-09-26 10:56:43,514 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0109 | Val rms_score: 0.4752
205
+ 2025-09-26 10:56:45,814 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0132 | Val rms_score: 0.4794
206
+ 2025-09-26 10:56:47,840 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0139 | Val rms_score: 0.4573
207
+ 2025-09-26 10:56:49,674 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0081 | Val rms_score: 0.4818
208
+ 2025-09-26 10:56:52,397 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0111 | Val rms_score: 0.4660
209
+ 2025-09-26 10:56:54,779 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0099 | Val rms_score: 0.4752
210
+ 2025-09-26 10:56:57,227 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0088 | Val rms_score: 0.4761
211
+ 2025-09-26 10:56:59,655 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0092 | Val rms_score: 0.4458
212
+ 2025-09-26 10:57:02,297 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0084 | Val rms_score: 0.4628
213
+ 2025-09-26 10:57:04,942 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0087 | Val rms_score: 0.4609
214
+ 2025-09-26 10:57:06,588 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0090 | Val rms_score: 0.4531
+ 2025-09-26 10:57:08,972 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0089 | Val rms_score: 0.4628
+ 2025-09-26 10:57:11,448 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0078 | Val rms_score: 0.4691
+ 2025-09-26 10:57:13,913 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0079 | Val rms_score: 0.4731
+ 2025-09-26 10:57:16,755 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0080 | Val rms_score: 0.4699
+ 2025-09-26 10:57:19,051 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0068 | Val rms_score: 0.4672
+ 2025-09-26 10:57:21,489 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0091 | Val rms_score: 0.4530
+ 2025-09-26 10:57:23,796 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0077 | Val rms_score: 0.4553
+ 2025-09-26 10:57:24,122 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.7480
+ 2025-09-26 10:57:24,477 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_ppb_r at 2025-09-26_10-57-24
+ 2025-09-26 10:57:26,908 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.7000 | Val rms_score: 0.5940
+ 2025-09-26 10:57:26,908 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
+ 2025-09-26 10:57:27,532 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5940
+ 2025-09-26 10:57:30,324 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.3875 | Val rms_score: 0.5015
+ 2025-09-26 10:57:30,509 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
+ 2025-09-26 10:57:31,080 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5015
+ 2025-09-26 10:57:33,325 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3250 | Val rms_score: 0.5147
+ 2025-09-26 10:57:35,654 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3125 | Val rms_score: 0.4869
+ 2025-09-26 10:57:35,843 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
+ 2025-09-26 10:57:36,415 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4869
+ 2025-09-26 10:57:38,978 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1867 | Val rms_score: 0.4892
+ 2025-09-26 10:57:40,632 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2437 | Val rms_score: 0.4644
+ 2025-09-26 10:57:41,189 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
+ 2025-09-26 10:57:41,753 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.4644
+ 2025-09-26 10:57:44,614 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1391 | Val rms_score: 0.4953
+ 2025-09-26 10:57:46,393 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1445 | Val rms_score: 0.4923
+ 2025-09-26 10:57:48,660 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1133 | Val rms_score: 0.4708
+ 2025-09-26 10:57:51,079 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0918 | Val rms_score: 0.4677
+ 2025-09-26 10:57:53,335 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0781 | Val rms_score: 0.4807
+ 2025-09-26 10:57:55,981 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0703 | Val rms_score: 0.4921
+ 2025-09-26 10:57:58,305 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0625 | Val rms_score: 0.4948
+ 2025-09-26 10:58:00,400 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0688 | Val rms_score: 0.5023
+ 2025-09-26 10:58:02,319 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0523 | Val rms_score: 0.5445
+ 2025-09-26 10:58:04,871 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0477 | Val rms_score: 0.5481
+ 2025-09-26 10:58:07,556 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0523 | Val rms_score: 0.5273
+ 2025-09-26 10:58:10,351 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0389 | Val rms_score: 0.5285
+ 2025-09-26 10:58:13,036 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0365 | Val rms_score: 0.5335
+ 2025-09-26 10:58:15,738 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0328 | Val rms_score: 0.5275
+ 2025-09-26 10:58:18,090 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0344 | Val rms_score: 0.5256
+ 2025-09-26 10:58:20,673 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0309 | Val rms_score: 0.5361
+ 2025-09-26 10:58:23,049 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0256 | Val rms_score: 0.5375
+ 2025-09-26 10:58:25,535 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0262 | Val rms_score: 0.5490
+ 2025-09-26 10:58:27,937 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0227 | Val rms_score: 0.5678
+ 2025-09-26 10:58:30,117 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0247 | Val rms_score: 0.5772
+ 2025-09-26 10:58:32,453 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0270 | Val rms_score: 0.5650
+ 2025-09-26 10:58:35,001 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0194 | Val rms_score: 0.5618
+ 2025-09-26 10:58:37,193 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0224 | Val rms_score: 0.5655
+ 2025-09-26 10:58:39,699 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0218 | Val rms_score: 0.5835
+ 2025-09-26 10:58:42,502 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0184 | Val rms_score: 0.5912
+ 2025-09-26 10:58:45,121 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0187 | Val rms_score: 0.5719
+ 2025-09-26 10:58:47,397 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0175 | Val rms_score: 0.5870
+ 2025-09-26 10:58:49,185 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0216 | Val rms_score: 0.5795
+ 2025-09-26 10:58:51,694 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0143 | Val rms_score: 0.5827
+ 2025-09-26 10:58:53,939 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0142 | Val rms_score: 0.6002
+ 2025-09-26 10:58:56,571 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0121 | Val rms_score: 0.6121
+ 2025-09-26 10:58:59,327 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0155 | Val rms_score: 0.5990
+ 2025-09-26 10:59:01,572 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0142 | Val rms_score: 0.5721
+ 2025-09-26 10:59:03,877 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0177 | Val rms_score: 0.5690
+ 2025-09-26 10:59:05,651 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0156 | Val rms_score: 0.5653
+ 2025-09-26 10:59:08,657 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0123 | Val rms_score: 0.5627
+ 2025-09-26 10:59:11,086 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0131 | Val rms_score: 0.5743
+ 2025-09-26 10:59:13,327 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0186 | Val rms_score: 0.5507
+ 2025-09-26 10:59:15,747 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0160 | Val rms_score: 0.5405
+ 2025-09-26 10:59:18,217 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0163 | Val rms_score: 0.5757
+ 2025-09-26 10:59:20,953 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0163 | Val rms_score: 0.5589
+ 2025-09-26 10:59:22,668 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0135 | Val rms_score: 0.5632
+ 2025-09-26 10:59:24,965 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0178 | Val rms_score: 0.5697
+ 2025-09-26 10:59:27,380 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0187 | Val rms_score: 0.5505
+ 2025-09-26 10:59:29,672 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0164 | Val rms_score: 0.5701
+ 2025-09-26 10:59:32,427 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0129 | Val rms_score: 0.5529
+ 2025-09-26 10:59:34,684 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0126 | Val rms_score: 0.5570
+ 2025-09-26 10:59:37,089 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0121 | Val rms_score: 0.5763
+ 2025-09-26 10:59:39,290 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0120 | Val rms_score: 0.5710
+ 2025-09-26 10:59:41,129 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0113 | Val rms_score: 0.5610
+ 2025-09-26 10:59:44,483 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0123 | Val rms_score: 0.5813
+ 2025-09-26 10:59:47,444 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0106 | Val rms_score: 0.5718
+ 2025-09-26 10:59:49,936 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0137 | Val rms_score: 0.5463
+ 2025-09-26 10:59:52,250 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0113 | Val rms_score: 0.5546
+ 2025-09-26 10:59:54,539 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0112 | Val rms_score: 0.5640
+ 2025-09-26 10:59:57,216 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0124 | Val rms_score: 0.5534
+ 2025-09-26 10:59:59,080 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0112 | Val rms_score: 0.5453
+ 2025-09-26 11:00:01,425 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0098 | Val rms_score: 0.5384
+ 2025-09-26 11:00:03,783 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0120 | Val rms_score: 0.5516
+ 2025-09-26 11:00:06,186 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0127 | Val rms_score: 0.5569
+ 2025-09-26 11:00:08,820 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0083 | Val rms_score: 0.5651
+ 2025-09-26 11:00:11,105 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0098 | Val rms_score: 0.5659
+ 2025-09-26 11:00:13,408 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0089 | Val rms_score: 0.5507
+ 2025-09-26 11:00:15,805 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0104 | Val rms_score: 0.5555
+ 2025-09-26 11:00:17,571 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0118 | Val rms_score: 0.5618
+ 2025-09-26 11:00:20,482 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0117 | Val rms_score: 0.5614
+ 2025-09-26 11:00:22,779 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0123 | Val rms_score: 0.5623
+ 2025-09-26 11:00:25,102 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0096 | Val rms_score: 0.5547
+ 2025-09-26 11:00:27,586 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0081 | Val rms_score: 0.5498
+ 2025-09-26 11:00:29,406 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0102 | Val rms_score: 0.5419
+ 2025-09-26 11:00:32,116 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0098 | Val rms_score: 0.5432
+ 2025-09-26 11:00:34,512 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0107 | Val rms_score: 0.5383
+ 2025-09-26 11:00:36,406 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0102 | Val rms_score: 0.5494
+ 2025-09-26 11:00:38,771 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0086 | Val rms_score: 0.5486
+ 2025-09-26 11:00:41,151 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0081 | Val rms_score: 0.5327
+ 2025-09-26 11:00:43,914 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0093 | Val rms_score: 0.5384
+ 2025-09-26 11:00:46,282 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0103 | Val rms_score: 0.5514
+ 2025-09-26 11:00:48,613 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0111 | Val rms_score: 0.5479
+ 2025-09-26 11:00:50,963 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0100 | Val rms_score: 0.5451
+ 2025-09-26 11:00:53,320 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0119 | Val rms_score: 0.5352
+ 2025-09-26 11:00:56,245 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0103 | Val rms_score: 0.5387
+ 2025-09-26 11:00:58,777 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0085 | Val rms_score: 0.5383
+ 2025-09-26 11:01:01,117 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0105 | Val rms_score: 0.5370
+ 2025-09-26 11:01:03,459 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0080 | Val rms_score: 0.5352
+ 2025-09-26 11:01:05,876 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0082 | Val rms_score: 0.5253
+ 2025-09-26 11:01:08,530 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0102 | Val rms_score: 0.5255
+ 2025-09-26 11:01:10,827 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0112 | Val rms_score: 0.5343
+ 2025-09-26 11:01:12,496 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0103 | Val rms_score: 0.5317
+ 2025-09-26 11:01:15,195 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0075 | Val rms_score: 0.5428
+ 2025-09-26 11:01:17,779 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0106 | Val rms_score: 0.5458
+ 2025-09-26 11:01:20,563 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0105 | Val rms_score: 0.5380
+ 2025-09-26 11:01:22,947 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0145 | Val rms_score: 0.5592
+ 2025-09-26 11:01:25,370 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0081 | Val rms_score: 0.5628
+ 2025-09-26 11:01:27,850 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0109 | Val rms_score: 0.5602
+ 2025-09-26 11:01:28,268 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.8534
+ 2025-09-26 11:01:28,667 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.8446, Std Dev: 0.0756
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_adme_solubility_epochs100_batch_size32_20250926_110128.log ADDED
@@ -0,0 +1,341 @@
1
+ 2025-09-26 11:01:28,668 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_solubility
2
+ 2025-09-26 11:01:28,668 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - dataset: adme_solubility, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
3
+ 2025-09-26 11:01:28,673 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_solubility at 2025-09-26_11-01-28
4
+ 2025-09-26 11:01:39,654 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8091 | Val rms_score: 0.4178
5
+ 2025-09-26 11:01:39,654 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
6
+ 2025-09-26 11:01:40,266 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4178
7
+ 2025-09-26 11:01:53,412 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5031 | Val rms_score: 0.4562
8
+ 2025-09-26 11:02:06,058 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4727 | Val rms_score: 0.4249
9
+ 2025-09-26 11:02:18,858 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4437 | Val rms_score: 0.4601
10
+ 2025-09-26 11:02:30,125 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3818 | Val rms_score: 0.3964
11
+ 2025-09-26 11:02:30,281 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 275
12
+ 2025-09-26 11:02:30,857 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.3964
13
+ 2025-09-26 11:02:45,916 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2708 | Val rms_score: 0.4122
14
+ 2025-09-26 11:02:59,079 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1909 | Val rms_score: 0.4287
15
+ 2025-09-26 11:03:14,106 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1703 | Val rms_score: 0.4195
16
+ 2025-09-26 11:03:26,893 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1295 | Val rms_score: 0.4348
17
+ 2025-09-26 11:03:42,109 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1000 | Val rms_score: 0.4355
18
+ 2025-09-26 11:03:56,031 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0777 | Val rms_score: 0.4011
19
+ 2025-09-26 11:04:11,594 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0744 | Val rms_score: 0.4090
20
+ 2025-09-26 11:04:26,002 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0630 | Val rms_score: 0.4341
21
+ 2025-09-26 11:04:40,130 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0628 | Val rms_score: 0.4123
22
+ 2025-09-26 11:04:54,348 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0550 | Val rms_score: 0.3994
23
+ 2025-09-26 11:05:09,891 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0474 | Val rms_score: 0.4182
24
+ 2025-09-26 11:05:24,866 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0507 | Val rms_score: 0.4156
25
+ 2025-09-26 11:05:40,280 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0403 | Val rms_score: 0.4041
26
+ 2025-09-26 11:05:56,094 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0444 | Val rms_score: 0.4003
27
+ 2025-09-26 11:06:11,247 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0375 | Val rms_score: 0.4014
28
+ 2025-09-26 11:06:25,910 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0395 | Val rms_score: 0.3991
29
+ 2025-09-26 11:06:41,257 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0391 | Val rms_score: 0.3863
30
+ 2025-09-26 11:06:41,413 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1210
31
+ 2025-09-26 11:06:41,933 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 22 with val rms_score: 0.3863
32
+ 2025-09-26 11:06:56,406 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0372 | Val rms_score: 0.3870
33
+ 2025-09-26 11:07:10,122 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0371 | Val rms_score: 0.4116
34
+ 2025-09-26 11:07:25,855 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0293 | Val rms_score: 0.3997
35
+ 2025-09-26 11:07:40,134 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0324 | Val rms_score: 0.3852
36
+ 2025-09-26 11:07:40,593 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1430
37
+ 2025-09-26 11:07:41,173 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.3852
38
+ 2025-09-26 11:07:56,672 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0293 | Val rms_score: 0.3951
39
+ 2025-09-26 11:08:08,879 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0311 | Val rms_score: 0.4005
40
+ 2025-09-26 11:08:24,098 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0284 | Val rms_score: 0.3843
41
+ 2025-09-26 11:08:24,252 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1595
42
+ 2025-09-26 11:08:24,934 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 29 with val rms_score: 0.3843
43
+ 2025-09-26 11:08:37,578 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0272 | Val rms_score: 0.3806
44
+ 2025-09-26 11:08:37,766 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1650
45
+ 2025-09-26 11:08:38,417 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 30 with val rms_score: 0.3806
46
+ 2025-09-26 11:08:53,905 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0262 | Val rms_score: 0.3848
47
+ 2025-09-26 11:09:07,955 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0256 | Val rms_score: 0.3913
48
+ 2025-09-26 11:09:23,411 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0233 | Val rms_score: 0.3906
49
+ 2025-09-26 11:09:37,547 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0226 | Val rms_score: 0.3771
50
+ 2025-09-26 11:09:37,712 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1870
51
+ 2025-09-26 11:09:38,273 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 34 with val rms_score: 0.3771
52
+ 2025-09-26 11:09:53,597 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0253 | Val rms_score: 0.3819
53
+ 2025-09-26 11:10:07,875 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0229 | Val rms_score: 0.3821
54
+ 2025-09-26 11:10:24,475 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0217 | Val rms_score: 0.3912
55
+ 2025-09-26 11:10:39,236 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0210 | Val rms_score: 0.3874
56
+ 2025-09-26 11:10:54,719 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0207 | Val rms_score: 0.3840
57
+ 2025-09-26 11:11:09,696 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0214 | Val rms_score: 0.3876
58
+ 2025-09-26 11:11:24,333 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0193 | Val rms_score: 0.3905
59
+ 2025-09-26 11:11:36,669 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0203 | Val rms_score: 0.3888
60
+ 2025-09-26 11:11:52,110 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0220 | Val rms_score: 0.3864
61
+ 2025-09-26 11:12:06,803 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0204 | Val rms_score: 0.3855
62
+ 2025-09-26 11:12:21,891 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0209 | Val rms_score: 0.3859
63
+ 2025-09-26 11:12:37,120 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0201 | Val rms_score: 0.3822
64
+ 2025-09-26 11:12:50,559 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0203 | Val rms_score: 0.3881
65
+ 2025-09-26 11:13:05,810 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0180 | Val rms_score: 0.3841
66
+ 2025-09-26 11:13:18,634 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0195 | Val rms_score: 0.3884
67
+ 2025-09-26 11:13:34,147 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0184 | Val rms_score: 0.3889
68
+ 2025-09-26 11:13:47,197 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0162 | Val rms_score: 0.3828
69
+ 2025-09-26 11:14:02,773 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0183 | Val rms_score: 0.3822
70
+ 2025-09-26 11:14:17,153 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0150 | Val rms_score: 0.3848
71
+ 2025-09-26 11:14:32,624 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0178 | Val rms_score: 0.3875
72
+ 2025-09-26 11:14:48,068 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0197 | Val rms_score: 0.3833
73
+ 2025-09-26 11:15:03,620 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0206 | Val rms_score: 0.3848
74
+ 2025-09-26 11:15:17,574 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0174 | Val rms_score: 0.3842
75
+ 2025-09-26 11:15:32,529 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0157 | Val rms_score: 0.3973
76
+ 2025-09-26 11:15:47,756 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0174 | Val rms_score: 0.3834
77
+ 2025-09-26 11:16:01,757 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0164 | Val rms_score: 0.3867
78
+ 2025-09-26 11:16:17,008 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0159 | Val rms_score: 0.3901
79
+ 2025-09-26 11:16:31,641 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0193 | Val rms_score: 0.3813
80
+ 2025-09-26 11:16:46,983 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0144 | Val rms_score: 0.3777
81
+ 2025-09-26 11:17:00,094 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0168 | Val rms_score: 0.3865
82
+ 2025-09-26 11:17:15,525 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0170 | Val rms_score: 0.3834
83
+ 2025-09-26 11:17:29,751 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0164 | Val rms_score: 0.3826
84
+ 2025-09-26 11:17:45,299 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0162 | Val rms_score: 0.3881
85
+ 2025-09-26 11:17:59,363 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0166 | Val rms_score: 0.3900
86
+ 2025-09-26 11:18:14,338 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0163 | Val rms_score: 0.3824
87
+ 2025-09-26 11:18:28,748 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0147 | Val rms_score: 0.3771
88
+ 2025-09-26 11:18:44,181 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0152 | Val rms_score: 0.3849
89
+ 2025-09-26 11:18:59,145 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0143 | Val rms_score: 0.3845
90
+ 2025-09-26 11:19:15,232 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0131 | Val rms_score: 0.3912
91
+ 2025-09-26 11:19:29,405 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0135 | Val rms_score: 0.3835
92
+ 2025-09-26 11:19:44,711 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0152 | Val rms_score: 0.3905
93
+ 2025-09-26 11:19:59,365 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0157 | Val rms_score: 0.3875
94
+ 2025-09-26 11:20:15,140 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0146 | Val rms_score: 0.3875
95
+ 2025-09-26 11:20:29,604 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0139 | Val rms_score: 0.3816
96
+ 2025-09-26 11:20:44,632 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0149 | Val rms_score: 0.3827
97
+ 2025-09-26 11:20:59,429 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0136 | Val rms_score: 0.3817
98
+ 2025-09-26 11:21:14,666 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0157 | Val rms_score: 0.3867
99
+ 2025-09-26 11:21:30,177 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0137 | Val rms_score: 0.3755
100
+ 2025-09-26 11:21:30,339 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 4510
101
+ 2025-09-26 11:21:30,905 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 82 with val rms_score: 0.3755
102
+ 2025-09-26 11:21:46,472 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0146 | Val rms_score: 0.3870
+ 2025-09-26 11:22:00,454 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0129 | Val rms_score: 0.3787
+ 2025-09-26 11:22:14,743 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0143 | Val rms_score: 0.3905
+ 2025-09-26 11:22:30,149 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0141 | Val rms_score: 0.3849
+ 2025-09-26 11:22:42,810 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0133 | Val rms_score: 0.3841
+ 2025-09-26 11:22:58,282 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0127 | Val rms_score: 0.3773
+ 2025-09-26 11:23:12,574 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0134 | Val rms_score: 0.3852
+ 2025-09-26 11:23:28,115 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0125 | Val rms_score: 0.3867
+ 2025-09-26 11:23:43,014 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0155 | Val rms_score: 0.3794
+ 2025-09-26 11:23:58,648 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0135 | Val rms_score: 0.3902
+ 2025-09-26 11:24:13,221 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0136 | Val rms_score: 0.3884
+ 2025-09-26 11:24:28,497 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0146 | Val rms_score: 0.3864
+ 2025-09-26 11:24:42,936 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0136 | Val rms_score: 0.3765
+ 2025-09-26 11:24:58,355 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0129 | Val rms_score: 0.3812
+ 2025-09-26 11:25:12,662 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0132 | Val rms_score: 0.3833
+ 2025-09-26 11:25:28,034 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0121 | Val rms_score: 0.3913
+ 2025-09-26 11:25:42,887 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0127 | Val rms_score: 0.3879
+ 2025-09-26 11:25:58,425 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0115 | Val rms_score: 0.3742
+ 2025-09-26 11:25:58,581 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 5500
+ 2025-09-26 11:25:59,280 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 100 with val rms_score: 0.3742
+ 2025-09-26 11:25:59,934 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.4961
+ 2025-09-26 11:26:00,345 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_solubility at 2025-09-26_11-26-00
+ 2025-09-26 11:26:14,278 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7864 | Val rms_score: 0.4466
+ 2025-09-26 11:26:14,279 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
+ 2025-09-26 11:26:14,986 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4466
+ 2025-09-26 11:26:30,479 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5813 | Val rms_score: 0.4061
+ 2025-09-26 11:26:30,655 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 110
+ 2025-09-26 11:26:31,213 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4061
+ 2025-09-26 11:26:45,343 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4636 | Val rms_score: 0.3748
+ 2025-09-26 11:26:45,587 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 165
+ 2025-09-26 11:26:46,274 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.3748
+ 2025-09-26 11:27:02,174 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4375 | Val rms_score: 0.4331
+ 2025-09-26 11:27:16,812 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3114 | Val rms_score: 0.3690
+ 2025-09-26 11:27:16,965 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 275
+ 2025-09-26 11:27:17,521 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.3690
+ 2025-09-26 11:27:33,532 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2625 | Val rms_score: 0.4104
+ 2025-09-26 11:27:48,824 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1818 | Val rms_score: 0.3919
+ 2025-09-26 11:28:04,480 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1281 | Val rms_score: 0.4339
+ 2025-09-26 11:28:19,329 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1131 | Val rms_score: 0.4057
+ 2025-09-26 11:28:35,109 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0881 | Val rms_score: 0.3797
+ 2025-09-26 11:28:49,762 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0820 | Val rms_score: 0.3847
+ 2025-09-26 11:29:05,371 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0690 | Val rms_score: 0.3905
+ 2025-09-26 11:29:18,519 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0646 | Val rms_score: 0.4141
+ 2025-09-26 11:29:34,116 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0514 | Val rms_score: 0.3942
+ 2025-09-26 11:29:47,965 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0506 | Val rms_score: 0.3930
+ 2025-09-26 11:30:03,341 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0435 | Val rms_score: 0.4073
+ 2025-09-26 11:30:16,484 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0437 | Val rms_score: 0.3851
+ 2025-09-26 11:30:31,668 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0418 | Val rms_score: 0.3835
+ 2025-09-26 11:30:46,973 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0399 | Val rms_score: 0.3917
+ 2025-09-26 11:31:02,071 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0386 | Val rms_score: 0.3996
+ 2025-09-26 11:31:16,785 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0325 | Val rms_score: 0.3936
+ 2025-09-26 11:31:32,227 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0299 | Val rms_score: 0.4041
+ 2025-09-26 11:31:46,940 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0320 | Val rms_score: 0.4010
+ 2025-09-26 11:32:01,855 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0287 | Val rms_score: 0.3917
+ 2025-09-26 11:32:16,363 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0341 | Val rms_score: 0.3932
+ 2025-09-26 11:32:31,695 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0314 | Val rms_score: 0.4044
+ 2025-09-26 11:32:46,655 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0325 | Val rms_score: 0.4020
+ 2025-09-26 11:33:02,018 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0289 | Val rms_score: 0.4094
+ 2025-09-26 11:33:17,263 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0298 | Val rms_score: 0.3955
+ 2025-09-26 11:33:31,832 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0258 | Val rms_score: 0.4077
+ 2025-09-26 11:33:47,396 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0218 | Val rms_score: 0.3927
+ 2025-09-26 11:34:00,747 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0251 | Val rms_score: 0.3941
+ 2025-09-26 11:34:16,202 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0272 | Val rms_score: 0.4001
+ 2025-09-26 11:34:30,315 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0236 | Val rms_score: 0.3921
+ 2025-09-26 11:34:45,126 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0252 | Val rms_score: 0.3848
+ 2025-09-26 11:34:59,439 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0232 | Val rms_score: 0.3914
+ 2025-09-26 11:35:16,226 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0232 | Val rms_score: 0.3946
+ 2025-09-26 11:35:30,531 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0240 | Val rms_score: 0.3877
+ 2025-09-26 11:35:46,172 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0231 | Val rms_score: 0.3923
+ 2025-09-26 11:35:59,246 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0223 | Val rms_score: 0.3881
+ 2025-09-26 11:36:14,746 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0217 | Val rms_score: 0.4022
+ 2025-09-26 11:36:28,721 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0192 | Val rms_score: 0.3850
+ 2025-09-26 11:36:44,199 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0212 | Val rms_score: 0.3898
+ 2025-09-26 11:36:58,688 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0188 | Val rms_score: 0.3908
+ 2025-09-26 11:37:14,306 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0193 | Val rms_score: 0.3917
+ 2025-09-26 11:37:29,232 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0191 | Val rms_score: 0.3860
+ 2025-09-26 11:37:44,674 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0183 | Val rms_score: 0.3947
+ 2025-09-26 11:37:59,669 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0175 | Val rms_score: 0.3914
+ 2025-09-26 11:38:14,935 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0185 | Val rms_score: 0.3828
+ 2025-09-26 11:38:30,269 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0198 | Val rms_score: 0.3932
+ 2025-09-26 11:38:44,807 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0199 | Val rms_score: 0.3939
+ 2025-09-26 11:39:00,318 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0183 | Val rms_score: 0.3992
+ 2025-09-26 11:39:14,511 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0160 | Val rms_score: 0.3923
+ 2025-09-26 11:39:29,892 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0174 | Val rms_score: 0.3969
+ 2025-09-26 11:39:43,858 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0196 | Val rms_score: 0.3879
+ 2025-09-26 11:39:59,217 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0178 | Val rms_score: 0.3870
+ 2025-09-26 11:40:13,924 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0165 | Val rms_score: 0.3770
+ 2025-09-26 11:40:29,382 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0164 | Val rms_score: 0.3883
+ 2025-09-26 11:40:43,913 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0162 | Val rms_score: 0.4034
+ 2025-09-26 11:40:58,921 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0167 | Val rms_score: 0.3885
+ 2025-09-26 11:41:13,335 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0160 | Val rms_score: 0.3896
+ 2025-09-26 11:41:28,280 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0171 | Val rms_score: 0.3911
+ 2025-09-26 11:41:42,248 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0148 | Val rms_score: 0.3876
+ 2025-09-26 11:41:57,674 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0156 | Val rms_score: 0.3855
+ 2025-09-26 11:42:12,304 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0166 | Val rms_score: 0.3907
+ 2025-09-26 11:42:27,684 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0159 | Val rms_score: 0.3861
+ 2025-09-26 11:42:42,776 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0156 | Val rms_score: 0.3953
+ 2025-09-26 11:42:58,246 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0146 | Val rms_score: 0.3871
+ 2025-09-26 11:43:12,791 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0141 | Val rms_score: 0.3785
+ 2025-09-26 11:43:28,126 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0155 | Val rms_score: 0.3860
+ 2025-09-26 11:43:42,959 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0141 | Val rms_score: 0.3949
+ 2025-09-26 11:43:58,638 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0130 | Val rms_score: 0.3930
+ 2025-09-26 11:44:14,648 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0137 | Val rms_score: 0.3859
+ 2025-09-26 11:44:30,301 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0132 | Val rms_score: 0.3910
+ 2025-09-26 11:44:45,234 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0147 | Val rms_score: 0.3816
+ 2025-09-26 11:45:00,992 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0140 | Val rms_score: 0.3853
+ 2025-09-26 11:45:16,402 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0127 | Val rms_score: 0.3856
+ 2025-09-26 11:45:31,683 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0129 | Val rms_score: 0.3888
+ 2025-09-26 11:45:46,112 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0152 | Val rms_score: 0.3804
+ 2025-09-26 11:46:01,512 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0138 | Val rms_score: 0.3847
+ 2025-09-26 11:46:15,329 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0137 | Val rms_score: 0.3846
+ 2025-09-26 11:46:30,472 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0112 | Val rms_score: 0.3881
+ 2025-09-26 11:46:45,058 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0146 | Val rms_score: 0.3915
+ 2025-09-26 11:47:00,439 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0156 | Val rms_score: 0.3857
+ 2025-09-26 11:47:15,373 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0138 | Val rms_score: 0.3852
+ 2025-09-26 11:47:30,550 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0140 | Val rms_score: 0.3932
+ 2025-09-26 11:47:45,693 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0128 | Val rms_score: 0.3824
+ 2025-09-26 11:48:01,141 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0141 | Val rms_score: 0.3863
+ 2025-09-26 11:48:15,544 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0132 | Val rms_score: 0.3814
+ 2025-09-26 11:48:30,858 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0130 | Val rms_score: 0.3878
+ 2025-09-26 11:48:46,311 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0117 | Val rms_score: 0.3829
+ 2025-09-26 11:49:02,403 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0123 | Val rms_score: 0.3838
+ 2025-09-26 11:49:17,115 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0145 | Val rms_score: 0.3815
+ 2025-09-26 11:49:32,863 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0126 | Val rms_score: 0.3872
+ 2025-09-26 11:49:47,312 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0111 | Val rms_score: 0.3873
+ 2025-09-26 11:50:02,754 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0128 | Val rms_score: 0.3866
+ 2025-09-26 11:50:18,040 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0126 | Val rms_score: 0.3897
+ 2025-09-26 11:50:33,864 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0127 | Val rms_score: 0.3826
+ 2025-09-26 11:50:48,654 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0130 | Val rms_score: 0.3804
+ 2025-09-26 11:51:04,324 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0124 | Val rms_score: 0.3895
+ 2025-09-26 11:51:05,377 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.4758
+ 2025-09-26 11:51:05,734 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_solubility at 2025-09-26_11-51-05
+ 2025-09-26 11:51:19,074 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8318 | Val rms_score: 0.4011
+ 2025-09-26 11:51:19,074 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
+ 2025-09-26 11:51:19,654 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4011
+ 2025-09-26 11:51:35,314 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5844 | Val rms_score: 0.4217
+ 2025-09-26 11:51:50,038 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5136 | Val rms_score: 0.5173
+ 2025-09-26 11:52:05,497 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3891 | Val rms_score: 0.4561
+ 2025-09-26 11:52:19,975 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3295 | Val rms_score: 0.4140
+ 2025-09-26 11:52:35,957 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2354 | Val rms_score: 0.4713
+ 2025-09-26 11:52:50,859 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2034 | Val rms_score: 0.3814
+ 2025-09-26 11:52:51,011 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 385
+ 2025-09-26 11:52:51,563 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.3814
+ 2025-09-26 11:53:06,722 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1602 | Val rms_score: 0.4028
+ 2025-09-26 11:53:21,268 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1267 | Val rms_score: 0.3828
+ 2025-09-26 11:53:36,663 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1062 | Val rms_score: 0.4414
+ 2025-09-26 11:53:51,022 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1094 | Val rms_score: 0.4409
+ 2025-09-26 11:54:06,758 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0875 | Val rms_score: 0.4034
+ 2025-09-26 11:54:21,504 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0760 | Val rms_score: 0.3898
+ 2025-09-26 11:54:37,045 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0622 | Val rms_score: 0.3874
+ 2025-09-26 11:54:51,704 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0559 | Val rms_score: 0.3973
+ 2025-09-26 11:55:07,110 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0540 | Val rms_score: 0.3992
+ 2025-09-26 11:55:21,972 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0487 | Val rms_score: 0.3832
+ 2025-09-26 11:55:37,375 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0423 | Val rms_score: 0.3864
+ 2025-09-26 11:55:52,800 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0352 | Val rms_score: 0.3922
+ 2025-09-26 11:56:07,686 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0375 | Val rms_score: 0.3846
+ 2025-09-26 11:56:20,965 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0378 | Val rms_score: 0.3980
+ 2025-09-26 11:56:36,782 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0389 | Val rms_score: 0.4032
+ 2025-09-26 11:56:50,239 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0352 | Val rms_score: 0.4000
+ 2025-09-26 11:57:05,824 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0350 | Val rms_score: 0.3726
+ 2025-09-26 11:57:06,055 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1320
+ 2025-09-26 11:57:06,634 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 24 with val rms_score: 0.3726
+ 2025-09-26 11:57:21,200 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0307 | Val rms_score: 0.3839
+ 2025-09-26 11:57:36,946 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0326 | Val rms_score: 0.3918
+ 2025-09-26 11:57:52,470 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0283 | Val rms_score: 0.3834
+ 2025-09-26 11:58:08,064 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0291 | Val rms_score: 0.3752
+ 2025-09-26 11:58:22,322 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0273 | Val rms_score: 0.3755
+ 2025-09-26 11:58:38,428 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0272 | Val rms_score: 0.3885
+ 2025-09-26 11:58:53,103 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0328 | Val rms_score: 0.3870
+ 2025-09-26 11:59:08,861 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0273 | Val rms_score: 0.3869
+ 2025-09-26 11:59:23,147 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0281 | Val rms_score: 0.3896
+ 2025-09-26 11:59:38,802 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0241 | Val rms_score: 0.3897
+ 2025-09-26 11:59:53,165 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0253 | Val rms_score: 0.3917
+ 2025-09-26 12:00:08,811 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0234 | Val rms_score: 0.3879
+ 2025-09-26 12:00:24,558 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0214 | Val rms_score: 0.3912
+ 2025-09-26 12:00:40,147 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0234 | Val rms_score: 0.3797
+ 2025-09-26 12:00:54,451 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0222 | Val rms_score: 0.3884
+ 2025-09-26 12:01:09,965 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0226 | Val rms_score: 0.3891
+ 2025-09-26 12:01:24,558 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0217 | Val rms_score: 0.3889
+ 2025-09-26 12:01:40,118 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0220 | Val rms_score: 0.3894
+ 2025-09-26 12:01:55,306 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0222 | Val rms_score: 0.3881
+ 2025-09-26 12:02:11,100 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0212 | Val rms_score: 0.3813
+ 2025-09-26 12:02:26,656 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0250 | Val rms_score: 0.3783
+ 2025-09-26 12:02:42,134 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0221 | Val rms_score: 0.3856
+ 2025-09-26 12:02:57,500 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0216 | Val rms_score: 0.3767
+ 2025-09-26 12:03:12,184 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0203 | Val rms_score: 0.3805
+ 2025-09-26 12:03:26,302 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0190 | Val rms_score: 0.3860
+ 2025-09-26 12:03:42,025 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0234 | Val rms_score: 0.3932
+ 2025-09-26 12:03:56,175 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0227 | Val rms_score: 0.3761
+ 2025-09-26 12:04:12,005 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0180 | Val rms_score: 0.3754
+ 2025-09-26 12:04:26,279 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0190 | Val rms_score: 0.3807
+ 2025-09-26 12:04:41,951 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0173 | Val rms_score: 0.3796
+ 2025-09-26 12:04:57,239 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0195 | Val rms_score: 0.3855
+ 2025-09-26 12:05:12,412 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0178 | Val rms_score: 0.3835
+ 2025-09-26 12:05:26,842 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0157 | Val rms_score: 0.3778
+ 2025-09-26 12:05:42,207 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0165 | Val rms_score: 0.3799
+ 2025-09-26 12:05:55,132 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0162 | Val rms_score: 0.3818
+ 2025-09-26 12:06:10,356 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0160 | Val rms_score: 0.3861
+ 2025-09-26 12:06:24,354 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0164 | Val rms_score: 0.3807
+ 2025-09-26 12:06:40,130 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0163 | Val rms_score: 0.3865
+ 2025-09-26 12:06:55,148 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0158 | Val rms_score: 0.3823
+ 2025-09-26 12:07:10,210 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0212 | Val rms_score: 0.3842
+ 2025-09-26 12:07:25,365 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0162 | Val rms_score: 0.3879
+ 2025-09-26 12:07:40,097 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0176 | Val rms_score: 0.3812
+ 2025-09-26 12:07:55,894 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0153 | Val rms_score: 0.3814
+ 2025-09-26 12:08:09,287 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0163 | Val rms_score: 0.3776
+ 2025-09-26 12:08:24,346 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0149 | Val rms_score: 0.3855
+ 2025-09-26 12:08:38,717 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0145 | Val rms_score: 0.3862
+ 2025-09-26 12:08:53,940 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0155 | Val rms_score: 0.3826
+ 2025-09-26 12:09:08,649 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0158 | Val rms_score: 0.3819
312
+ 2025-09-26 12:09:25,323 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0178 | Val rms_score: 0.3815
313
+ 2025-09-26 12:09:37,854 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0146 | Val rms_score: 0.3797
314
+ 2025-09-26 12:09:53,249 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0157 | Val rms_score: 0.3901
315
+ 2025-09-26 12:10:07,060 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0155 | Val rms_score: 0.3800
316
+ 2025-09-26 12:10:22,917 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0167 | Val rms_score: 0.3786
317
+ 2025-09-26 12:10:37,343 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0149 | Val rms_score: 0.3909
318
+ 2025-09-26 12:10:53,058 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0136 | Val rms_score: 0.3803
319
+ 2025-09-26 12:11:06,878 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0154 | Val rms_score: 0.3876
320
+ 2025-09-26 12:11:22,823 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0148 | Val rms_score: 0.3839
321
+ 2025-09-26 12:11:37,679 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0115 | Val rms_score: 0.3846
322
+ 2025-09-26 12:11:53,001 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0146 | Val rms_score: 0.3820
323
+ 2025-09-26 12:12:08,376 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0156 | Val rms_score: 0.3785
324
+ 2025-09-26 12:12:23,313 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0138 | Val rms_score: 0.3873
325
+ 2025-09-26 12:12:38,100 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0135 | Val rms_score: 0.3870
326
+ 2025-09-26 12:12:53,551 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0136 | Val rms_score: 0.3802
327
+ 2025-09-26 12:13:08,587 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0141 | Val rms_score: 0.3878
328
+ 2025-09-26 12:13:23,106 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0142 | Val rms_score: 0.3884
329
+ 2025-09-26 12:13:38,502 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0131 | Val rms_score: 0.3766
330
+ 2025-09-26 12:13:54,215 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0128 | Val rms_score: 0.3807
331
+ 2025-09-26 12:14:10,331 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0128 | Val rms_score: 0.3824
332
+ 2025-09-26 12:14:22,503 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0166 | Val rms_score: 0.3825
333
+ 2025-09-26 12:14:38,208 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0132 | Val rms_score: 0.3815
334
+ 2025-09-26 12:14:51,950 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0139 | Val rms_score: 0.3910
335
+ 2025-09-26 12:15:06,628 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0136 | Val rms_score: 0.3870
336
+ 2025-09-26 12:15:21,409 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0136 | Val rms_score: 0.3854
337
+ 2025-09-26 12:15:36,902 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0143 | Val rms_score: 0.3876
338
+ 2025-09-26 12:15:49,823 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0146 | Val rms_score: 0.3891
339
+ 2025-09-26 12:16:05,266 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0139 | Val rms_score: 0.3841
340
+ 2025-09-26 12:16:06,343 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.4682
341
+ 2025-09-26 12:16:06,723 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4800, Std Dev: 0.0118
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_astrazeneca_cl_epochs100_batch_size32_20250926_121606.log ADDED
@@ -0,0 +1,319 @@
1
+ 2025-09-26 12:16:06,725 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_cl
2
+ 2025-09-26 12:16:06,725 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - dataset: astrazeneca_cl, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
3
+ 2025-09-26 12:16:06,729 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_cl at 2025-09-26_12-16-06
4
+ 2025-09-26 12:16:19,129 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9861 | Val rms_score: 0.5164
5
+ 2025-09-26 12:16:19,130 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
6
+ 2025-09-26 12:16:19,760 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5164
7
+ 2025-09-26 12:16:33,200 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5972 | Val rms_score: 0.5064
8
+ 2025-09-26 12:16:33,410 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 72
9
+ 2025-09-26 12:16:33,995 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5064
10
+ 2025-09-26 12:16:47,831 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5273 | Val rms_score: 0.5306
11
+ 2025-09-26 12:17:00,523 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4722 | Val rms_score: 0.5311
12
+ 2025-09-26 12:17:13,477 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4375 | Val rms_score: 0.5241
13
+ 2025-09-26 12:17:25,159 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3672 | Val rms_score: 0.5310
14
+ 2025-09-26 12:17:38,738 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3264 | Val rms_score: 0.5558
15
+ 2025-09-26 12:17:51,293 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2552 | Val rms_score: 0.5554
16
+ 2025-09-26 12:18:01,079 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2539 | Val rms_score: 0.5333
17
+ 2025-09-26 12:18:14,150 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2118 | Val rms_score: 0.5518
18
+ 2025-09-26 12:18:26,624 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1701 | Val rms_score: 0.5507
19
+ 2025-09-26 12:18:40,014 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1514 | Val rms_score: 0.5456
20
+ 2025-09-26 12:18:53,211 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1328 | Val rms_score: 0.5514
21
+ 2025-09-26 12:19:05,226 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1074 | Val rms_score: 0.5533
22
+ 2025-09-26 12:19:18,364 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1319 | Val rms_score: 0.5597
23
+ 2025-09-26 12:19:30,810 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1137 | Val rms_score: 0.5547
24
+ 2025-09-26 12:19:44,021 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1120 | Val rms_score: 0.5549
25
+ 2025-09-26 12:19:57,015 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0942 | Val rms_score: 0.5497
26
+ 2025-09-26 12:20:07,636 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1016 | Val rms_score: 0.5441
27
+ 2025-09-26 12:20:20,633 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0945 | Val rms_score: 0.5505
28
+ 2025-09-26 12:20:33,673 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0816 | Val rms_score: 0.5506
29
+ 2025-09-26 12:20:46,136 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0729 | Val rms_score: 0.5476
30
+ 2025-09-26 12:20:59,310 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0681 | Val rms_score: 0.5485
31
+ 2025-09-26 12:21:10,631 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0720 | Val rms_score: 0.5465
32
+ 2025-09-26 12:21:23,633 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0707 | Val rms_score: 0.5489
33
+ 2025-09-26 12:21:36,577 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0664 | Val rms_score: 0.5480
34
+ 2025-09-26 12:21:48,842 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0634 | Val rms_score: 0.5371
35
+ 2025-09-26 12:22:03,069 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0737 | Val rms_score: 0.5530
36
+ 2025-09-26 12:22:13,397 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0690 | Val rms_score: 0.5557
37
+ 2025-09-26 12:22:26,475 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0616 | Val rms_score: 0.5417
38
+ 2025-09-26 12:22:39,577 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0552 | Val rms_score: 0.5505
39
+ 2025-09-26 12:22:51,840 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0501 | Val rms_score: 0.5474
40
+ 2025-09-26 12:23:04,405 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0488 | Val rms_score: 0.5444
41
+ 2025-09-26 12:23:15,994 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0475 | Val rms_score: 0.5421
42
+ 2025-09-26 12:23:29,041 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0521 | Val rms_score: 0.5492
43
+ 2025-09-26 12:23:42,237 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0488 | Val rms_score: 0.5402
44
+ 2025-09-26 12:23:53,185 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0464 | Val rms_score: 0.5430
45
+ 2025-09-26 12:24:06,221 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0423 | Val rms_score: 0.5455
46
+ 2025-09-26 12:24:19,146 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0356 | Val rms_score: 0.5437
47
+ 2025-09-26 12:24:31,864 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0421 | Val rms_score: 0.5453
48
+ 2025-09-26 12:24:44,758 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0401 | Val rms_score: 0.5416
49
+ 2025-09-26 12:24:54,936 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0361 | Val rms_score: 0.5469
50
+ 2025-09-26 12:25:08,170 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0434 | Val rms_score: 0.5454
51
+ 2025-09-26 12:25:21,316 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0438 | Val rms_score: 0.5500
52
+ 2025-09-26 12:25:33,385 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0385 | Val rms_score: 0.5468
53
+ 2025-09-26 12:25:46,298 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0373 | Val rms_score: 0.5451
54
+ 2025-09-26 12:25:57,803 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0406 | Val rms_score: 0.5414
55
+ 2025-09-26 12:26:11,090 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0379 | Val rms_score: 0.5447
56
+ 2025-09-26 12:26:24,227 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0362 | Val rms_score: 0.5423
57
+ 2025-09-26 12:26:36,445 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0354 | Val rms_score: 0.5482
58
+ 2025-09-26 12:26:49,617 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0360 | Val rms_score: 0.5306
59
+ 2025-09-26 12:27:01,232 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0375 | Val rms_score: 0.5459
60
+ 2025-09-26 12:27:14,603 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0325 | Val rms_score: 0.5424
61
+ 2025-09-26 12:27:27,865 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0334 | Val rms_score: 0.5384
62
+ 2025-09-26 12:27:40,081 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0334 | Val rms_score: 0.5425
63
+ 2025-09-26 12:27:54,160 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0378 | Val rms_score: 0.5364
64
+ 2025-09-26 12:28:06,270 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0334 | Val rms_score: 0.5376
65
+ 2025-09-26 12:28:18,889 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0375 | Val rms_score: 0.5416
66
+ 2025-09-26 12:28:32,054 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0376 | Val rms_score: 0.5311
67
+ 2025-09-26 12:28:44,185 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0345 | Val rms_score: 0.5371
68
+ 2025-09-26 12:28:57,402 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0328 | Val rms_score: 0.5437
69
+ 2025-09-26 12:29:08,652 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0317 | Val rms_score: 0.5334
70
+ 2025-09-26 12:29:21,838 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0310 | Val rms_score: 0.5357
71
+ 2025-09-26 12:29:34,911 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0282 | Val rms_score: 0.5377
72
+ 2025-09-26 12:29:47,043 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0295 | Val rms_score: 0.5342
73
+ 2025-09-26 12:30:00,114 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0295 | Val rms_score: 0.5365
74
+ 2025-09-26 12:30:11,901 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0327 | Val rms_score: 0.5320
75
+ 2025-09-26 12:30:25,245 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0289 | Val rms_score: 0.5362
76
+ 2025-09-26 12:30:38,477 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0308 | Val rms_score: 0.5372
77
+ 2025-09-26 12:30:50,702 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0254 | Val rms_score: 0.5347
78
+ 2025-09-26 12:31:03,846 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0273 | Val rms_score: 0.5353
79
+ 2025-09-26 12:31:15,934 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0304 | Val rms_score: 0.5316
80
+ 2025-09-26 12:31:29,067 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0308 | Val rms_score: 0.5339
81
+ 2025-09-26 12:31:42,040 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0278 | Val rms_score: 0.5357
82
+ 2025-09-26 12:31:54,337 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0278 | Val rms_score: 0.5365
83
+ 2025-09-26 12:32:07,252 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0280 | Val rms_score: 0.5408
84
+ 2025-09-26 12:32:19,126 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0260 | Val rms_score: 0.5342
85
+ 2025-09-26 12:32:32,390 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0244 | Val rms_score: 0.5309
86
+ 2025-09-26 12:32:45,158 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0295 | Val rms_score: 0.5258
87
+ 2025-09-26 12:32:56,747 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0299 | Val rms_score: 0.5302
88
+ 2025-09-26 12:33:09,732 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0291 | Val rms_score: 0.5300
89
+ 2025-09-26 12:33:22,207 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0262 | Val rms_score: 0.5337
90
+ 2025-09-26 12:33:35,367 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0254 | Val rms_score: 0.5320
91
+ 2025-09-26 12:33:49,356 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0231 | Val rms_score: 0.5328
92
+ 2025-09-26 12:34:01,712 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0282 | Val rms_score: 0.5323
93
+ 2025-09-26 12:34:14,707 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0252 | Val rms_score: 0.5375
94
+ 2025-09-26 12:34:26,998 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0244 | Val rms_score: 0.5317
95
+ 2025-09-26 12:34:39,974 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0299 | Val rms_score: 0.5288
96
+ 2025-09-26 12:34:52,643 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0205 | Val rms_score: 0.5273
97
+ 2025-09-26 12:35:04,722 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0257 | Val rms_score: 0.5287
98
+ 2025-09-26 12:35:17,802 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0264 | Val rms_score: 0.5295
99
+ 2025-09-26 12:35:28,740 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0223 | Val rms_score: 0.5310
100
+ 2025-09-26 12:35:41,696 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0241 | Val rms_score: 0.5287
101
+ 2025-09-26 12:35:54,709 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0235 | Val rms_score: 0.5352
102
+ 2025-09-26 12:36:06,791 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0270 | Val rms_score: 0.5332
103
+ 2025-09-26 12:36:19,811 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0245 | Val rms_score: 0.5282
104
+ 2025-09-26 12:36:31,537 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0244 | Val rms_score: 0.5276
105
+ 2025-09-26 12:36:44,674 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0229 | Val rms_score: 0.5308
106
+ 2025-09-26 12:36:57,381 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0221 | Val rms_score: 0.5311
107
+ 2025-09-26 12:37:09,736 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0232 | Val rms_score: 0.5280
108
+ 2025-09-26 12:37:10,718 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.5379
109
+ 2025-09-26 12:37:11,124 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_cl at 2025-09-26_12-37-11
110
+ 2025-09-26 12:37:23,527 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8576 | Val rms_score: 0.4930
111
+ 2025-09-26 12:37:23,527 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
112
+ 2025-09-26 12:37:24,082 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4930
113
+ 2025-09-26 12:37:36,025 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6007 | Val rms_score: 0.5166
114
+ 2025-09-26 12:37:49,223 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4883 | Val rms_score: 0.4994
115
+ 2025-09-26 12:38:02,028 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4219 | Val rms_score: 0.5000
116
+ 2025-09-26 12:38:14,084 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3889 | Val rms_score: 0.5245
117
+ 2025-09-26 12:38:26,627 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3301 | Val rms_score: 0.5065
118
+ 2025-09-26 12:38:39,196 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2674 | Val rms_score: 0.5209
119
+ 2025-09-26 12:38:51,736 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2153 | Val rms_score: 0.5270
120
+ 2025-09-26 12:39:04,308 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1862 | Val rms_score: 0.5316
121
+ 2025-09-26 12:39:15,590 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1745 | Val rms_score: 0.5344
122
+ 2025-09-26 12:39:28,733 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1510 | Val rms_score: 0.5189
123
+ 2025-09-26 12:39:41,367 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1201 | Val rms_score: 0.5353
124
+ 2025-09-26 12:39:54,521 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1111 | Val rms_score: 0.5424
125
+ 2025-09-26 12:40:07,543 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1211 | Val rms_score: 0.5310
126
+ 2025-09-26 12:40:19,861 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0942 | Val rms_score: 0.5368
127
+ 2025-09-26 12:40:33,027 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0894 | Val rms_score: 0.5253
128
+ 2025-09-26 12:40:45,058 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0905 | Val rms_score: 0.5539
129
+ 2025-09-26 12:40:58,089 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0907 | Val rms_score: 0.5283
130
+ 2025-09-26 12:41:11,381 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0807 | Val rms_score: 0.5321
131
+ 2025-09-26 12:41:22,451 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0840 | Val rms_score: 0.5325
132
+ 2025-09-26 12:41:35,559 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0720 | Val rms_score: 0.5416
133
+ 2025-09-26 12:41:48,792 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0734 | Val rms_score: 0.5455
134
+ 2025-09-26 12:42:00,851 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0709 | Val rms_score: 0.5255
135
+ 2025-09-26 12:42:14,156 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0725 | Val rms_score: 0.5410
136
+ 2025-09-26 12:42:25,726 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0673 | Val rms_score: 0.5338
137
+ 2025-09-26 12:42:38,819 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0725 | Val rms_score: 0.5336
138
+ 2025-09-26 12:42:52,103 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0699 | Val rms_score: 0.5331
139
+ 2025-09-26 12:43:04,699 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0723 | Val rms_score: 0.5312
140
+ 2025-09-26 12:43:17,238 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0484 | Val rms_score: 0.5305
141
+ 2025-09-26 12:43:29,517 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0484 | Val rms_score: 0.5349
142
+ 2025-09-26 12:43:42,762 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0498 | Val rms_score: 0.5293
143
+ 2025-09-26 12:43:55,627 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0497 | Val rms_score: 0.5369
144
+ 2025-09-26 12:44:07,738 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0460 | Val rms_score: 0.5241
145
+ 2025-09-26 12:44:20,506 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0436 | Val rms_score: 0.5312
146
+ 2025-09-26 12:44:32,781 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0488 | Val rms_score: 0.5278
147
+ 2025-09-26 12:44:45,517 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0432 | Val rms_score: 0.5329
148
+ 2025-09-26 12:44:58,767 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0413 | Val rms_score: 0.5303
149
+ 2025-09-26 12:45:10,622 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0397 | Val rms_score: 0.5288
150
+ 2025-09-26 12:45:23,869 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0378 | Val rms_score: 0.5344
151
+ 2025-09-26 12:45:36,156 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0375 | Val rms_score: 0.5275
152
+ 2025-09-26 12:45:49,239 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0393 | Val rms_score: 0.5223
153
+ 2025-09-26 12:46:01,954 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0386 | Val rms_score: 0.5352
154
+ 2025-09-26 12:46:12,938 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0373 | Val rms_score: 0.5298
155
+ 2025-09-26 12:46:25,847 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0360 | Val rms_score: 0.5309
156
+ 2025-09-26 12:46:38,060 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0381 | Val rms_score: 0.5381
157
+ 2025-09-26 12:46:51,045 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0369 | Val rms_score: 0.5255
158
+ 2025-09-26 12:47:04,283 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0365 | Val rms_score: 0.5271
159
+ 2025-09-26 12:47:15,045 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0360 | Val rms_score: 0.5326
160
+ 2025-09-26 12:47:28,086 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0334 | Val rms_score: 0.5218
161
+ 2025-09-26 12:47:40,710 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0297 | Val rms_score: 0.5238
162
+ 2025-09-26 12:47:53,559 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0330 | Val rms_score: 0.5292
163
+ 2025-09-26 12:48:06,537 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0358 | Val rms_score: 0.5298
164
+ 2025-09-26 12:48:16,804 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0289 | Val rms_score: 0.5280
165
+ 2025-09-26 12:48:29,297 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0317 | Val rms_score: 0.5284
166
+ 2025-09-26 12:48:42,023 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0304 | Val rms_score: 0.5249
167
+ 2025-09-26 12:48:53,401 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0332 | Val rms_score: 0.5343
168
+ 2025-09-26 12:49:06,451 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0289 | Val rms_score: 0.5268
169
+ 2025-09-26 12:49:18,589 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0284 | Val rms_score: 0.5368
170
+ 2025-09-26 12:49:31,916 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0301 | Val rms_score: 0.5302
171
+ 2025-09-26 12:49:45,114 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0280 | Val rms_score: 0.5298
172
+ 2025-09-26 12:49:56,635 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0271 | Val rms_score: 0.5266
173
+ 2025-09-26 12:50:09,807 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0277 | Val rms_score: 0.5261
174
+ 2025-09-26 12:50:21,608 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0310 | Val rms_score: 0.5295
+ 2025-09-26 12:50:34,527 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0396 | Val rms_score: 0.5255
+ 2025-09-26 12:50:47,752 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0297 | Val rms_score: 0.5216
+ 2025-09-26 12:50:59,039 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0280 | Val rms_score: 0.5244
+ 2025-09-26 12:51:10,826 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0282 | Val rms_score: 0.5244
+ 2025-09-26 12:51:23,133 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0284 | Val rms_score: 0.5254
+ 2025-09-26 12:51:36,319 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0282 | Val rms_score: 0.5243
+ 2025-09-26 12:51:48,500 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0270 | Val rms_score: 0.5208
+ 2025-09-26 12:51:59,425 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0255 | Val rms_score: 0.5250
+ 2025-09-26 12:52:11,796 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0263 | Val rms_score: 0.5271
+ 2025-09-26 12:52:24,386 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0240 | Val rms_score: 0.5241
+ 2025-09-26 12:52:37,690 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0238 | Val rms_score: 0.5224
+ 2025-09-26 12:52:49,926 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0237 | Val rms_score: 0.5262
+ 2025-09-26 12:53:01,103 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0242 | Val rms_score: 0.5239
+ 2025-09-26 12:53:13,399 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0250 | Val rms_score: 0.5211
+ 2025-09-26 12:53:25,941 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0208 | Val rms_score: 0.5247
+ 2025-09-26 12:53:39,064 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0230 | Val rms_score: 0.5259
+ 2025-09-26 12:53:50,630 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0234 | Val rms_score: 0.5217
+ 2025-09-26 12:54:02,275 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0225 | Val rms_score: 0.5204
+ 2025-09-26 12:54:14,886 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0238 | Val rms_score: 0.5262
+ 2025-09-26 12:54:27,517 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0227 | Val rms_score: 0.5209
+ 2025-09-26 12:54:41,331 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0220 | Val rms_score: 0.5235
+ 2025-09-26 12:54:54,233 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0230 | Val rms_score: 0.5243
+ 2025-09-26 12:55:04,642 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0218 | Val rms_score: 0.5235
+ 2025-09-26 12:55:17,373 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0204 | Val rms_score: 0.5264
+ 2025-09-26 12:55:29,759 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0216 | Val rms_score: 0.5191
+ 2025-09-26 12:55:42,110 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0214 | Val rms_score: 0.5290
+ 2025-09-26 12:55:55,215 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0222 | Val rms_score: 0.5267
+ 2025-09-26 12:56:07,392 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0244 | Val rms_score: 0.5283
+ 2025-09-26 12:56:18,238 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0246 | Val rms_score: 0.5216
+ 2025-09-26 12:56:30,555 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0213 | Val rms_score: 0.5233
+ 2025-09-26 12:56:43,034 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0208 | Val rms_score: 0.5211
+ 2025-09-26 12:56:56,012 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0235 | Val rms_score: 0.5243
+ 2025-09-26 12:57:05,695 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0181 | Val rms_score: 0.5245
+ 2025-09-26 12:57:17,900 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0205 | Val rms_score: 0.5223
+ 2025-09-26 12:57:30,694 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0201 | Val rms_score: 0.5225
+ 2025-09-26 12:57:43,917 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0181 | Val rms_score: 0.5212
+ 2025-09-26 12:57:56,314 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0190 | Val rms_score: 0.5232
+ 2025-09-26 12:57:57,240 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.5301
+ 2025-09-26 12:57:57,695 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_cl at 2025-09-26_12-57-57
+ 2025-09-26 12:58:07,771 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8889 | Val rms_score: 0.5108
+ 2025-09-26 12:58:07,771 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
+ 2025-09-26 12:58:08,325 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5108
+ 2025-09-26 12:58:20,316 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5972 | Val rms_score: 0.4995
+ 2025-09-26 12:58:20,489 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 72
+ 2025-09-26 12:58:21,054 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4995
+ 2025-09-26 12:58:33,401 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5938 | Val rms_score: 0.5165
+ 2025-09-26 12:58:46,185 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4688 | Val rms_score: 0.5039
+ 2025-09-26 12:58:58,831 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3872 | Val rms_score: 0.5080
+ 2025-09-26 12:59:09,044 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3320 | Val rms_score: 0.5336
+ 2025-09-26 12:59:20,966 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2812 | Val rms_score: 0.5232
+ 2025-09-26 12:59:33,400 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2552 | Val rms_score: 0.5319
+ 2025-09-26 12:59:46,548 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2214 | Val rms_score: 0.5195
+ 2025-09-26 12:59:59,506 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1918 | Val rms_score: 0.5167
+ 2025-09-26 13:00:10,713 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1667 | Val rms_score: 0.5184
+ 2025-09-26 13:00:22,586 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1475 | Val rms_score: 0.5084
+ 2025-09-26 13:00:35,426 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1285 | Val rms_score: 0.5211
+ 2025-09-26 13:00:48,384 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0977 | Val rms_score: 0.5173
+ 2025-09-26 13:01:01,288 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1146 | Val rms_score: 0.5174
+ 2025-09-26 13:01:12,487 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1089 | Val rms_score: 0.5128
+ 2025-09-26 13:01:23,481 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1165 | Val rms_score: 0.5075
+ 2025-09-26 13:01:36,053 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0994 | Val rms_score: 0.5121
+ 2025-09-26 13:01:49,080 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0898 | Val rms_score: 0.5131
+ 2025-09-26 13:02:01,949 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0902 | Val rms_score: 0.5070
+ 2025-09-26 13:02:13,318 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0812 | Val rms_score: 0.5105
+ 2025-09-26 13:02:25,852 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0694 | Val rms_score: 0.5118
+ 2025-09-26 13:02:38,267 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0787 | Val rms_score: 0.5140
+ 2025-09-26 13:02:50,989 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0686 | Val rms_score: 0.5140
+ 2025-09-26 13:03:02,821 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0699 | Val rms_score: 0.5213
+ 2025-09-26 13:03:14,892 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0642 | Val rms_score: 0.5169
+ 2025-09-26 13:03:27,484 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0564 | Val rms_score: 0.5197
+ 2025-09-26 13:03:40,930 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0674 | Val rms_score: 0.5189
+ 2025-09-26 13:03:53,508 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0564 | Val rms_score: 0.5114
+ 2025-09-26 13:04:06,626 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0573 | Val rms_score: 0.5179
+ 2025-09-26 13:04:20,313 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0566 | Val rms_score: 0.5202
+ 2025-09-26 13:04:34,115 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0577 | Val rms_score: 0.5258
+ 2025-09-26 13:04:46,512 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0540 | Val rms_score: 0.5067
+ 2025-09-26 13:04:59,281 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0563 | Val rms_score: 0.5157
+ 2025-09-26 13:05:11,832 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0508 | Val rms_score: 0.5230
+ 2025-09-26 13:05:24,327 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0499 | Val rms_score: 0.5170
+ 2025-09-26 13:05:37,286 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0530 | Val rms_score: 0.5130
+ 2025-09-26 13:05:49,954 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0454 | Val rms_score: 0.5122
+ 2025-09-26 13:06:02,218 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0376 | Val rms_score: 0.5179
+ 2025-09-26 13:06:15,803 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0458 | Val rms_score: 0.5128
+ 2025-09-26 13:06:28,320 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0467 | Val rms_score: 0.5122
+ 2025-09-26 13:06:40,956 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0459 | Val rms_score: 0.5176
+ 2025-09-26 13:06:53,285 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0469 | Val rms_score: 0.5098
+ 2025-09-26 13:07:04,806 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0423 | Val rms_score: 0.5091
+ 2025-09-26 13:07:17,408 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0457 | Val rms_score: 0.5073
+ 2025-09-26 13:07:30,372 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0360 | Val rms_score: 0.5156
+ 2025-09-26 13:07:43,274 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0369 | Val rms_score: 0.5094
+ 2025-09-26 13:07:53,889 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0366 | Val rms_score: 0.5102
+ 2025-09-26 13:08:06,287 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0360 | Val rms_score: 0.5147
+ 2025-09-26 13:08:19,267 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0369 | Val rms_score: 0.5107
+ 2025-09-26 13:08:32,152 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0367 | Val rms_score: 0.5075
+ 2025-09-26 13:08:44,712 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0373 | Val rms_score: 0.5040
+ 2025-09-26 13:08:54,803 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0332 | Val rms_score: 0.5142
+ 2025-09-26 13:09:07,353 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0317 | Val rms_score: 0.5114
+ 2025-09-26 13:09:20,515 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0323 | Val rms_score: 0.5150
+ 2025-09-26 13:09:34,042 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0339 | Val rms_score: 0.5160
+ 2025-09-26 13:09:46,348 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0304 | Val rms_score: 0.5126
+ 2025-09-26 13:09:57,812 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0319 | Val rms_score: 0.5063
+ 2025-09-26 13:10:10,007 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0314 | Val rms_score: 0.5112
+ 2025-09-26 13:10:22,765 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0310 | Val rms_score: 0.5102
+ 2025-09-26 13:10:35,695 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0339 | Val rms_score: 0.5096
+ 2025-09-26 13:10:48,850 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0332 | Val rms_score: 0.5128
+ 2025-09-26 13:10:59,017 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0317 | Val rms_score: 0.5056
+ 2025-09-26 13:11:11,344 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0330 | Val rms_score: 0.5123
+ 2025-09-26 13:11:23,976 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0304 | Val rms_score: 0.5070
+ 2025-09-26 13:11:36,962 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0323 | Val rms_score: 0.5088
+ 2025-09-26 13:11:48,996 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0304 | Val rms_score: 0.5094
+ 2025-09-26 13:12:00,533 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0295 | Val rms_score: 0.5093
+ 2025-09-26 13:12:12,948 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0293 | Val rms_score: 0.5106
+ 2025-09-26 13:12:25,877 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0268 | Val rms_score: 0.5142
+ 2025-09-26 13:12:38,553 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0268 | Val rms_score: 0.5157
+ 2025-09-26 13:12:50,230 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0291 | Val rms_score: 0.5101
+ 2025-09-26 13:13:02,695 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0279 | Val rms_score: 0.5115
+ 2025-09-26 13:13:15,125 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0272 | Val rms_score: 0.5118
+ 2025-09-26 13:13:27,602 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0256 | Val rms_score: 0.5105
+ 2025-09-26 13:13:40,366 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0269 | Val rms_score: 0.5130
+ 2025-09-26 13:13:53,833 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0251 | Val rms_score: 0.5116
+ 2025-09-26 13:14:04,921 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0264 | Val rms_score: 0.5123
+ 2025-09-26 13:14:17,284 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0277 | Val rms_score: 0.5168
+ 2025-09-26 13:14:29,486 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0262 | Val rms_score: 0.5162
+ 2025-09-26 13:14:41,926 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0284 | Val rms_score: 0.5097
+ 2025-09-26 13:14:55,273 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0245 | Val rms_score: 0.5081
+ 2025-09-26 13:15:07,461 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0262 | Val rms_score: 0.5091
+ 2025-09-26 13:15:20,313 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0243 | Val rms_score: 0.5099
+ 2025-09-26 13:15:33,051 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0250 | Val rms_score: 0.5130
+ 2025-09-26 13:15:45,494 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0233 | Val rms_score: 0.5093
+ 2025-09-26 13:15:59,045 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0254 | Val rms_score: 0.5064
+ 2025-09-26 13:16:09,763 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0240 | Val rms_score: 0.5076
+ 2025-09-26 13:16:22,570 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0219 | Val rms_score: 0.5091
+ 2025-09-26 13:16:35,460 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0248 | Val rms_score: 0.5114
+ 2025-09-26 13:16:45,901 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0250 | Val rms_score: 0.5100
+ 2025-09-26 13:16:58,328 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0229 | Val rms_score: 0.5106
+ 2025-09-26 13:17:10,438 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0246 | Val rms_score: 0.5091
+ 2025-09-26 13:17:23,421 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0218 | Val rms_score: 0.5075
+ 2025-09-26 13:17:35,360 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0246 | Val rms_score: 0.5105
+ 2025-09-26 13:17:46,261 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0195 | Val rms_score: 0.5115
+ 2025-09-26 13:17:58,979 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0215 | Val rms_score: 0.5089
+ 2025-09-26 13:18:11,562 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0208 | Val rms_score: 0.5087
+ 2025-09-26 13:18:24,445 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0221 | Val rms_score: 0.5091
+ 2025-09-26 13:18:36,694 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0215 | Val rms_score: 0.5094
+ 2025-09-26 13:18:37,677 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.5373
+ 2025-09-26 13:18:38,072 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.5351, Std Dev: 0.0036
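The "Final Triplicate Test Results" line above aggregates the per-run test rms_scores into a mean and standard deviation. A minimal sketch of that aggregation, assuming a population standard deviation; only runs 2 and 3 (0.5301, 0.5373) appear in this excerpt, so the run-1 value below is an illustrative placeholder, not the logged score:

```python
import statistics

# Test rms_score from each of the three runs. Runs 2 and 3 come from the log
# above; the run-1 value is a placeholder (its line is outside this excerpt).
scores = [0.5379, 0.5301, 0.5373]

avg = statistics.mean(scores)    # arithmetic mean over the triplicate
std = statistics.pstdev(scores)  # population standard deviation (ddof=0)

print(f"Final Triplicate Test Results - Avg rms_score: {avg:.4f}, Std Dev: {std:.4f}")
```

Because the logged summary rounds each statistic independently, the printed values approximate rather than exactly reproduce the log's "Avg rms_score: 0.5351, Std Dev: 0.0036".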
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_astrazeneca_logd74_epochs100_batch_size32_20250926_131838.log ADDED
@@ -0,0 +1,411 @@
+ 2025-09-26 13:18:38,074 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_logd74
+ 2025-09-26 13:18:38,074 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - dataset: astrazeneca_logd74, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
+ 2025-09-26 13:18:38,078 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_logd74 at 2025-09-26_13-18-38
+ 2025-09-26 13:19:07,271 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.4594 | Val rms_score: 0.8573
+ 2025-09-26 13:19:07,271 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 105
+ 2025-09-26 13:19:07,834 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.8573
+ 2025-09-26 13:19:40,551 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.2922 | Val rms_score: 0.8266
+ 2025-09-26 13:19:40,701 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 210
+ 2025-09-26 13:19:41,263 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.8266
+ 2025-09-26 13:20:15,212 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.2083 | Val rms_score: 0.8089
+ 2025-09-26 13:20:15,400 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 315
+ 2025-09-26 13:20:15,992 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.8089
+ 2025-09-26 13:20:47,959 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.1625 | Val rms_score: 0.8035
+ 2025-09-26 13:20:48,162 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 420
+ 2025-09-26 13:20:48,997 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.8035
+ 2025-09-26 13:21:22,005 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1531 | Val rms_score: 0.8059
+ 2025-09-26 13:21:54,566 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1344 | Val rms_score: 0.8383
+ 2025-09-26 13:22:28,116 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1129 | Val rms_score: 0.8182
+ 2025-09-26 13:23:00,830 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1031 | Val rms_score: 0.8006
+ 2025-09-26 13:23:01,001 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 840
+ 2025-09-26 13:23:01,600 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.8006
+ 2025-09-26 13:23:35,456 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0931 | Val rms_score: 0.8033
+ 2025-09-26 13:24:09,698 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0753 | Val rms_score: 0.7741
+ 2025-09-26 13:24:09,857 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1050
+ 2025-09-26 13:24:10,426 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.7741
+ 2025-09-26 13:24:43,057 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0773 | Val rms_score: 0.7921
+ 2025-09-26 13:25:15,898 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0719 | Val rms_score: 0.7953
+ 2025-09-26 13:25:48,688 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0599 | Val rms_score: 0.7981
+ 2025-09-26 13:26:22,054 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0549 | Val rms_score: 0.7808
+ 2025-09-26 13:26:55,493 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0542 | Val rms_score: 0.7909
+ 2025-09-26 13:27:29,192 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0508 | Val rms_score: 0.7699
+ 2025-09-26 13:27:29,809 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1680
+ 2025-09-26 13:27:30,405 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.7699
+ 2025-09-26 13:28:03,037 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0478 | Val rms_score: 0.7783
+ 2025-09-26 13:28:35,739 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0469 | Val rms_score: 0.7807
+ 2025-09-26 13:29:09,002 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0441 | Val rms_score: 0.7802
+ 2025-09-26 13:29:42,442 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0419 | Val rms_score: 0.7698
+ 2025-09-26 13:29:42,601 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2100
+ 2025-09-26 13:29:43,181 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.7698
+ 2025-09-26 13:30:15,423 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0402 | Val rms_score: 0.7878
+ 2025-09-26 13:30:47,885 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0480 | Val rms_score: 0.7870
+ 2025-09-26 13:31:19,594 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0354 | Val rms_score: 0.7780
+ 2025-09-26 13:31:52,996 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0389 | Val rms_score: 0.7745
+ 2025-09-26 13:32:26,153 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0375 | Val rms_score: 0.7725
+ 2025-09-26 13:32:59,388 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0346 | Val rms_score: 0.7633
+ 2025-09-26 13:32:59,946 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2730
+ 2025-09-26 13:33:00,694 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.7633
+ 2025-09-26 13:33:33,683 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0353 | Val rms_score: 0.7636
+ 2025-09-26 13:34:06,172 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0311 | Val rms_score: 0.7767
+ 2025-09-26 13:34:40,404 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0319 | Val rms_score: 0.7678
+ 2025-09-26 13:35:13,533 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0331 | Val rms_score: 0.7727
+ 2025-09-26 13:35:47,284 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0307 | Val rms_score: 0.7625
+ 2025-09-26 13:35:47,832 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3255
+ 2025-09-26 13:35:48,440 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 31 with val rms_score: 0.7625
+ 2025-09-26 13:36:22,070 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0296 | Val rms_score: 0.7706
+ 2025-09-26 13:36:56,081 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0297 | Val rms_score: 0.7555
+ 2025-09-26 13:36:56,241 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3465
+ 2025-09-26 13:36:56,801 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 33 with val rms_score: 0.7555
+ 2025-09-26 13:37:30,126 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0292 | Val rms_score: 0.7665
+ 2025-09-26 13:38:03,661 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0253 | Val rms_score: 0.7606
+ 2025-09-26 13:38:37,306 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0266 | Val rms_score: 0.7548
+ 2025-09-26 13:38:37,833 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3780
+ 2025-09-26 13:38:38,399 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 36 with val rms_score: 0.7548
+ 2025-09-26 13:39:10,623 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0278 | Val rms_score: 0.7723
+ 2025-09-26 13:39:41,946 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0264 | Val rms_score: 0.7663
+ 2025-09-26 13:40:13,757 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0265 | Val rms_score: 0.7627
+ 2025-09-26 13:40:45,474 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0247 | Val rms_score: 0.7645
+ 2025-09-26 13:41:19,004 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0254 | Val rms_score: 0.7659
+ 2025-09-26 13:41:52,197 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0299 | Val rms_score: 0.7599
+ 2025-09-26 13:42:25,406 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0232 | Val rms_score: 0.7553
+ 2025-09-26 13:42:58,320 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0227 | Val rms_score: 0.7605
+ 2025-09-26 13:43:30,727 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0222 | Val rms_score: 0.7627
+ 2025-09-26 13:44:03,291 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0245 | Val rms_score: 0.7596
+ 2025-09-26 13:44:36,420 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0243 | Val rms_score: 0.7560
+ 2025-09-26 13:45:09,120 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0224 | Val rms_score: 0.7513
+ 2025-09-26 13:45:09,308 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5040
+ 2025-09-26 13:45:09,905 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 48 with val rms_score: 0.7513
+ 2025-09-26 13:45:41,114 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0238 | Val rms_score: 0.7489
+ 2025-09-26 13:45:41,311 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5145
+ 2025-09-26 13:45:41,910 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 49 with val rms_score: 0.7489
+ 2025-09-26 13:46:15,685 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0220 | Val rms_score: 0.7609
+ 2025-09-26 13:46:48,640 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0223 | Val rms_score: 0.7695
+ 2025-09-26 13:47:21,713 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0221 | Val rms_score: 0.7667
+ 2025-09-26 13:47:53,637 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0215 | Val rms_score: 0.7635
+ 2025-09-26 13:48:26,497 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0205 | Val rms_score: 0.7552
+ 2025-09-26 13:48:58,121 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0200 | Val rms_score: 0.7526
+ 2025-09-26 13:49:30,385 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0210 | Val rms_score: 0.7599
+ 2025-09-26 13:50:03,784 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0219 | Val rms_score: 0.7587
+ 2025-09-26 13:50:37,970 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0220 | Val rms_score: 0.7555
+ 2025-09-26 13:51:10,511 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0214 | Val rms_score: 0.7510
+ 2025-09-26 13:51:43,869 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0203 | Val rms_score: 0.7526
+ 2025-09-26 13:52:16,229 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0191 | Val rms_score: 0.7525
+ 2025-09-26 13:52:49,790 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0186 | Val rms_score: 0.7563
+ 2025-09-26 13:53:23,991 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0163 | Val rms_score: 0.7568
+ 2025-09-26 13:53:57,529 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0209 | Val rms_score: 0.7626
+ 2025-09-26 13:54:30,577 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0182 | Val rms_score: 0.7586
+ 2025-09-26 13:55:03,964 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0215 | Val rms_score: 0.7567
+ 2025-09-26 13:55:38,489 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0196 | Val rms_score: 0.7592
+ 2025-09-26 13:56:11,625 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0194 | Val rms_score: 0.7623
+ 2025-09-26 13:56:45,197 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0191 | Val rms_score: 0.7523
+ 2025-09-26 13:57:18,174 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0187 | Val rms_score: 0.7503
+ 2025-09-26 13:57:51,215 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0182 | Val rms_score: 0.7581
+ 2025-09-26 13:58:25,775 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0207 | Val rms_score: 0.7520
+ 2025-09-26 13:58:59,056 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0181 | Val rms_score: 0.7537
+ 2025-09-26 13:59:32,537 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0187 | Val rms_score: 0.7523
+ 2025-09-26 14:00:05,336 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0180 | Val rms_score: 0.7523
+ 2025-09-26 14:00:38,214 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0186 | Val rms_score: 0.7557
+ 2025-09-26 14:01:11,373 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0184 | Val rms_score: 0.7588
+ 2025-09-26 14:01:43,753 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0186 | Val rms_score: 0.7598
+ 2025-09-26 14:02:16,979 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0187 | Val rms_score: 0.7550
+ 2025-09-26 14:02:50,366 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0181 | Val rms_score: 0.7557
+ 2025-09-26 14:03:23,596 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0180 | Val rms_score: 0.7520
+ 2025-09-26 14:03:57,303 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0189 | Val rms_score: 0.7541
+ 2025-09-26 14:04:30,630 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0203 | Val rms_score: 0.7483
+ 2025-09-26 14:04:30,786 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 8715
+ 2025-09-26 14:04:31,436 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 83 with val rms_score: 0.7483
+ 2025-09-26 14:05:03,731 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0157 | Val rms_score: 0.7555
+ 2025-09-26 14:05:37,103 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0162 | Val rms_score: 0.7488
+ 2025-09-26 14:06:11,237 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0193 | Val rms_score: 0.7548
+ 2025-09-26 14:06:42,779 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0175 | Val rms_score: 0.7511
+ 2025-09-26 14:07:15,986 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0163 | Val rms_score: 0.7481
+ 2025-09-26 14:07:16,151 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 9240
+ 2025-09-26 14:07:16,730 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 88 with val rms_score: 0.7481
+ 2025-09-26 14:07:50,009 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0171 | Val rms_score: 0.7512
+ 2025-09-26 14:08:22,993 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0166 | Val rms_score: 0.7516
+ 2025-09-26 14:08:55,936 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0168 | Val rms_score: 0.7595
+ 2025-09-26 14:09:28,253 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0161 | Val rms_score: 0.7588
+ 2025-09-26 14:09:59,925 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0163 | Val rms_score: 0.7547
+ 2025-09-26 14:10:29,956 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0169 | Val rms_score: 0.7563
+ 2025-09-26 14:11:00,265 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0173 | Val rms_score: 0.7518
+ 2025-09-26 14:11:34,751 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0163 | Val rms_score: 0.7518
+ 2025-09-26 14:12:08,130 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0165 | Val rms_score: 0.7543
+ 2025-09-26 14:12:41,091 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0163 | Val rms_score: 0.7471
+ 2025-09-26 14:12:41,247 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 10290
+ 2025-09-26 14:12:41,828 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 98 with val rms_score: 0.7471
+ 2025-09-26 14:13:14,646 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0163 | Val rms_score: 0.7519
+ 2025-09-26 14:13:47,406 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0159 | Val rms_score: 0.7512
+ 2025-09-26 14:13:49,326 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Test rms_score: 0.8100
+ 2025-09-26 14:13:49,722 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_logd74 at 2025-09-26_14-13-49
+ 2025-09-26 14:14:20,988 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.3500 | Val rms_score: 0.8481
+ 2025-09-26 14:14:20,988 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 105
+ 2025-09-26 14:14:21,539 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.8481
+ 2025-09-26 14:14:53,429 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.2969 | Val rms_score: 0.8235
+ 2025-09-26 14:14:53,611 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 210
+ 2025-09-26 14:14:54,201 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.8235
+ 2025-09-26 14:15:24,389 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.1917 | Val rms_score: 0.7838
+ 2025-09-26 14:15:24,569 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 315
+ 2025-09-26 14:15:25,116 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.7838
+ 2025-09-26 14:15:54,348 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.1766 | Val rms_score: 0.8013
+ 2025-09-26 14:16:26,662 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1256 | Val rms_score: 0.7964
+ 2025-09-26 14:16:58,809 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1354 | Val rms_score: 0.8123
+ 2025-09-26 14:17:29,852 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1027 | Val rms_score: 0.8010
+ 2025-09-26 14:18:01,257 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0863 | Val rms_score: 0.7951
+ 2025-09-26 14:18:31,949 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0851 | Val rms_score: 0.7845
+ 2025-09-26 14:19:04,787 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0784 | Val rms_score: 0.8023
+ 2025-09-26 14:19:35,977 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0653 | Val rms_score: 0.7950
+ 2025-09-26 14:20:08,197 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0638 | Val rms_score: 0.7773
+ 2025-09-26 14:20:08,350 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1260
+ 2025-09-26 14:20:08,896 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.7773
+ 2025-09-26 14:20:39,326 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0550 | Val rms_score: 0.7989
+ 2025-09-26 14:21:10,708 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0529 | Val rms_score: 0.7899
+ 2025-09-26 14:21:41,920 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0492 | Val rms_score: 0.7809
+ 2025-09-26 14:22:12,743 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0535 | Val rms_score: 0.7832
+ 2025-09-26 14:22:44,053 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0463 | Val rms_score: 0.7715
+ 2025-09-26 14:22:44,209 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1785
+ 2025-09-26 14:22:44,764 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 17 with val rms_score: 0.7715
+ 2025-09-26 14:23:15,637 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0434 | Val rms_score: 0.7752
+ 2025-09-26 14:23:47,375 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0396 | Val rms_score: 0.7626
+ 2025-09-26 14:23:47,532 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1995
+ 2025-09-26 14:23:48,080 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.7626
+ 2025-09-26 14:24:19,729 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0400 | Val rms_score: 0.7680
+ 2025-09-26 14:24:51,016 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0439 | Val rms_score: 0.7734
+ 2025-09-26 14:25:22,813 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0316 | Val rms_score: 0.7754
+ 2025-09-26 14:25:54,397 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0417 | Val rms_score: 0.7751
+ 2025-09-26 14:26:25,768 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0344 | Val rms_score: 0.7708
+ 2025-09-26 14:26:55,642 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0311 | Val rms_score: 0.7702
+ 2025-09-26 14:27:26,520 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0349 | Val rms_score: 0.7657
+ 2025-09-26 14:27:58,639 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0321 | Val rms_score: 0.7705
+ 2025-09-26 14:28:30,306 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0314 | Val rms_score: 0.7738
+ 2025-09-26 14:29:03,238 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0292 | Val rms_score: 0.7754
+ 2025-09-26 14:29:34,386 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0297 | Val rms_score: 0.7735
+ 2025-09-26 14:30:06,512 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0293 | Val rms_score: 0.7745
+ 2025-09-26 14:30:38,512 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0268 | Val rms_score: 0.7709
+ 2025-09-26 14:31:09,041 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0269 | Val rms_score: 0.7761
+ 2025-09-26 14:31:40,643 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0263 | Val rms_score: 0.7730
+ 2025-09-26 14:32:12,542 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0251 | Val rms_score: 0.7663
+ 2025-09-26 14:32:44,210 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0254 | Val rms_score: 0.7682
+ 2025-09-26 14:33:16,264 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0237 | Val rms_score: 0.7620
+ 2025-09-26 14:33:16,423 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3885
+ 2025-09-26 14:33:16,960 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 37 with val rms_score: 0.7620
+ 2025-09-26 14:33:49,015 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0231 | Val rms_score: 0.7701
+ 2025-09-26 14:34:21,075 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0237 | Val rms_score: 0.7651
+ 2025-09-26 14:34:52,184 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0238 | Val rms_score: 0.7674
+ 2025-09-26 14:35:23,156 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0242 | Val rms_score: 0.7690
+ 2025-09-26 14:35:52,702 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0179 | Val rms_score: 0.7632
+ 2025-09-26 14:36:24,841 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0230 | Val rms_score: 0.7735
+ 2025-09-26 14:36:56,610 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0225 | Val rms_score: 0.7719
+ 2025-09-26 14:37:28,293 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0225 | Val rms_score: 0.7672
+ 2025-09-26 14:38:00,307 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0208 | Val rms_score: 0.7685
+ 2025-09-26 14:38:32,647 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0211 | Val rms_score: 0.7644
+ 2025-09-26 14:39:05,762 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0208 | Val rms_score: 0.7700
+ 2025-09-26 14:39:33,551 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0214 | Val rms_score: 0.7697
+ 2025-09-26 14:40:04,462 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0219 | Val rms_score: 0.7569
+ 2025-09-26 14:40:04,614 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5250
+ 2025-09-26 14:40:05,172 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 50 with val rms_score: 0.7569
+ 2025-09-26 14:40:36,541 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0202 | Val rms_score: 0.7532
+ 2025-09-26 14:40:37,072 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5355
+ 2025-09-26 14:40:37,624 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 51 with val rms_score: 0.7532
+ 2025-09-26 14:41:09,166 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0199 | Val rms_score: 0.7538
+ 2025-09-26 14:41:41,597 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0192 | Val rms_score: 0.7615
+ 2025-09-26 14:42:13,257 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0210 | Val rms_score: 0.7651
+ 2025-09-26 14:42:45,581 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0193 | Val rms_score: 0.7629
+ 2025-09-26 14:43:17,397 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0188 | Val rms_score: 0.7565
+ 2025-09-26 14:43:49,729 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0197 | Val rms_score: 0.7583
+ 2025-09-26 14:44:21,650 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0197 | Val rms_score: 0.7591
+ 2025-09-26 14:44:51,166 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0190 | Val rms_score: 0.7613
+ 2025-09-26 14:45:22,189 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0187 | Val rms_score: 0.7625
+ 2025-09-26 14:45:52,781 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0157 | Val rms_score: 0.7602
+ 2025-09-26 14:46:25,032 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0175 | Val rms_score: 0.7494
+ 2025-09-26 14:46:25,189 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 6510
+ 2025-09-26 14:46:25,773 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 62 with val rms_score: 0.7494
+ 2025-09-26 14:46:57,990 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0168 | Val rms_score: 0.7604
+ 2025-09-26 14:47:29,640 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0185 | Val rms_score: 0.7575
+ 2025-09-26 14:48:01,821 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0182 | Val rms_score: 0.7612
+ 2025-09-26 14:48:33,928 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0204 | Val rms_score: 0.7538
+ 2025-09-26 14:49:07,368 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0180 | Val rms_score: 0.7612
+ 2025-09-26 14:49:38,766 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0185 | Val rms_score: 0.7591
+ 2025-09-26 14:50:09,109 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0187 | Val rms_score: 0.7561
+ 2025-09-26 14:50:41,060 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0166 | Val rms_score: 0.7567
+ 2025-09-26 14:51:12,337 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0185 | Val rms_score: 0.7569
+ 2025-09-26 14:51:44,209 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0163 | Val rms_score: 0.7658
+ 2025-09-26 14:52:16,314 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0183 | Val rms_score: 0.7601
+ 2025-09-26 14:52:48,392 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0172 | Val rms_score: 0.7587
+ 2025-09-26 14:53:19,740 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0169 | Val rms_score: 0.7507
+ 2025-09-26 14:53:51,616 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0166 | Val rms_score: 0.7613
+ 2025-09-26 14:54:24,212 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0171 | Val rms_score: 0.7553
+ 2025-09-26 14:54:55,424 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0159 | Val rms_score: 0.7585
+ 2025-09-26 14:55:25,579 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0164 | Val rms_score: 0.7554
+ 2025-09-26 14:55:55,654 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0168 | Val rms_score: 0.7576
+ 2025-09-26 14:56:27,255 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0183 | Val rms_score: 0.7550
+ 2025-09-26 14:56:58,524 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0126 | Val rms_score: 0.7576
+ 2025-09-26 14:57:30,423 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0156 | Val rms_score: 0.7564
+ 2025-09-26 14:58:02,687 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0176 | Val rms_score: 0.7541
+ 2025-09-26 14:58:34,170 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0155 | Val rms_score: 0.7548
+ 2025-09-26 14:59:07,060 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0163 | Val rms_score: 0.7504
+ 2025-09-26 14:59:37,955 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0165 | Val rms_score: 0.7547
+ 2025-09-26 15:00:08,783 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0170 | Val rms_score: 0.7569
+ 2025-09-26 15:00:40,771 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0158 | Val rms_score: 0.7504
+ 2025-09-26 15:01:11,890 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0163 | Val rms_score: 0.7490
+ 2025-09-26 15:01:12,049 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 9450
+ 2025-09-26 15:01:12,626 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 90 with val rms_score: 0.7490
+ 2025-09-26 15:01:44,077 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0160 | Val rms_score: 0.7440
+ 2025-09-26 15:01:44,626 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 9555
+ 2025-09-26 15:01:45,204 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 91 with val rms_score: 0.7440
+ 2025-09-26 15:02:15,680 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0156 | Val rms_score: 0.7510
+ 2025-09-26 15:02:45,750 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0161 | Val rms_score: 0.7584
+ 2025-09-26 15:03:16,795 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0156 | Val rms_score: 0.7533
+ 2025-09-26 15:03:48,031 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0145 | Val rms_score: 0.7507
+ 2025-09-26 15:04:19,757 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0152 | Val rms_score: 0.7551
+ 2025-09-26 15:04:50,666 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0144 | Val rms_score: 0.7452
+ 2025-09-26 15:05:22,299 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0156 | Val rms_score: 0.7522
+ 2025-09-26 15:05:53,265 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0150 | Val rms_score: 0.7530
+ 2025-09-26 15:06:24,175 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0152 | Val rms_score: 0.7519
+ 2025-09-26 15:06:25,725 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Test rms_score: 0.8253
+ 2025-09-26 15:06:26,286 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_logd74 at 2025-09-26_15-06-26
266
+ 2025-09-26 15:06:55,881 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.3578 | Val rms_score: 0.8702
267
+ 2025-09-26 15:06:55,881 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 105
268
+ 2025-09-26 15:06:56,701 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.8702
269
+ 2025-09-26 15:07:26,757 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.2812 | Val rms_score: 0.8497
270
+ 2025-09-26 15:07:26,944 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 210
271
+ 2025-09-26 15:07:27,471 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.8497
272
+ 2025-09-26 15:07:56,497 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.1823 | Val rms_score: 0.8469
273
+ 2025-09-26 15:07:56,675 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 315
274
+ 2025-09-26 15:07:57,202 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.8469
275
+ 2025-09-26 15:08:27,937 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.1781 | Val rms_score: 0.8166
276
+ 2025-09-26 15:08:28,095 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 420
277
+ 2025-09-26 15:08:28,629 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.8166
278
+ 2025-09-26 15:09:00,043 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1625 | Val rms_score: 0.8511
279
+ 2025-09-26 15:09:31,244 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1234 | Val rms_score: 0.8122
280
+ 2025-09-26 15:09:31,785 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 630
281
+ 2025-09-26 15:09:32,365 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.8122
282
+ 2025-09-26 15:10:03,428 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1161 | Val rms_score: 0.8091
283
+ 2025-09-26 15:10:03,644 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 735
284
+ 2025-09-26 15:10:04,263 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.8091
285
+ 2025-09-26 15:10:33,960 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0977 | Val rms_score: 0.8013
286
+ 2025-09-26 15:10:34,140 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 840
287
+ 2025-09-26 15:10:34,670 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.8013
288
+ 2025-09-26 15:11:05,911 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0924 | Val rms_score: 0.7991
289
+ 2025-09-26 15:11:06,064 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 945
290
+ 2025-09-26 15:11:06,591 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.7991
291
+ 2025-09-26 15:11:38,381 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0747 | Val rms_score: 0.7981
292
+ 2025-09-26 15:11:38,531 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1050
293
+ 2025-09-26 15:11:39,067 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.7981
294
+ 2025-09-26 15:12:10,241 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0750 | Val rms_score: 0.7918
295
+ 2025-09-26 15:12:10,772 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1155
296
+ 2025-09-26 15:12:11,306 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.7918
297
+ 2025-09-26 15:12:43,976 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0729 | Val rms_score: 0.7889
298
+ 2025-09-26 15:12:44,163 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1260
299
+ 2025-09-26 15:12:44,773 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.7889
300
+ 2025-09-26 15:13:10,458 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0659 | Val rms_score: 0.7754
301
+ 2025-09-26 15:13:10,650 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1365
302
+ 2025-09-26 15:13:11,196 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.7754
303
+ 2025-09-26 15:13:24,741 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0621 | Val rms_score: 0.7874
304
+ 2025-09-26 15:13:38,763 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0537 | Val rms_score: 0.7774
305
+ 2025-09-26 15:13:53,060 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0484 | Val rms_score: 0.7825
306
+ 2025-09-26 15:14:08,416 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0515 | Val rms_score: 0.7769
307
+ 2025-09-26 15:14:22,243 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0490 | Val rms_score: 0.7766
308
+ 2025-09-26 15:14:35,699 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0441 | Val rms_score: 0.7815
309
+ 2025-09-26 15:14:50,952 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0425 | Val rms_score: 0.7710
310
+ 2025-09-26 15:14:51,109 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2100
311
+ 2025-09-26 15:14:51,696 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.7710
312
+ 2025-09-26 15:15:05,511 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0389 | Val rms_score: 0.7734
313
+ 2025-09-26 15:15:19,334 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0428 | Val rms_score: 0.7716
314
+ 2025-09-26 15:15:32,746 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0354 | Val rms_score: 0.7751
315
+ 2025-09-26 15:15:45,738 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0344 | Val rms_score: 0.7862
316
+ 2025-09-26 15:15:58,911 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0356 | Val rms_score: 0.7816
317
+ 2025-09-26 15:16:11,867 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0370 | Val rms_score: 0.7665
318
+ 2025-09-26 15:16:12,426 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2730
319
+ 2025-09-26 15:16:12,985 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.7665
320
+ 2025-09-26 15:16:26,041 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0299 | Val rms_score: 0.7680
321
+ 2025-09-26 15:16:39,057 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0326 | Val rms_score: 0.7668
322
+ 2025-09-26 15:16:52,596 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0326 | Val rms_score: 0.7643
323
+ 2025-09-26 15:16:52,753 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3045
324
+ 2025-09-26 15:16:53,378 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 29 with val rms_score: 0.7643
325
+ 2025-09-26 15:17:06,535 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0338 | Val rms_score: 0.7785
326
+ 2025-09-26 15:17:19,288 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0310 | Val rms_score: 0.7699
327
+ 2025-09-26 15:17:33,100 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0310 | Val rms_score: 0.7718
328
+ 2025-09-26 15:17:46,418 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0294 | Val rms_score: 0.7760
329
+ 2025-09-26 15:17:59,759 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0269 | Val rms_score: 0.7749
330
+ 2025-09-26 15:18:11,347 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0281 | Val rms_score: 0.7706
331
+ 2025-09-26 15:18:21,737 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0285 | Val rms_score: 0.7716
332
+ 2025-09-26 15:18:32,703 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0263 | Val rms_score: 0.7721
333
+ 2025-09-26 15:18:42,880 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0283 | Val rms_score: 0.7714
334
+ 2025-09-26 15:18:54,515 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0268 | Val rms_score: 0.7652
335
+ 2025-09-26 15:19:04,645 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0253 | Val rms_score: 0.7605
336
+ 2025-09-26 15:19:04,800 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4200
337
+ 2025-09-26 15:19:05,414 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 40 with val rms_score: 0.7605
338
+ 2025-09-26 15:19:15,612 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0309 | Val rms_score: 0.7657
339
+ 2025-09-26 15:19:26,346 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0287 | Val rms_score: 0.7614
340
+ 2025-09-26 15:19:36,308 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0227 | Val rms_score: 0.7622
341
+ 2025-09-26 15:19:46,100 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0235 | Val rms_score: 0.7620
342
+ 2025-09-26 15:19:55,862 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0238 | Val rms_score: 0.7587
343
+ 2025-09-26 15:19:56,025 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4725
344
+ 2025-09-26 15:19:56,654 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 45 with val rms_score: 0.7587
345
+ 2025-09-26 15:20:06,280 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0229 | Val rms_score: 0.7635
346
+ 2025-09-26 15:20:16,156 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0235 | Val rms_score: 0.7656
347
+ 2025-09-26 15:20:26,884 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0231 | Val rms_score: 0.7600
348
+ 2025-09-26 15:20:36,467 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0236 | Val rms_score: 0.7570
349
+ 2025-09-26 15:20:36,628 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5145
350
+ 2025-09-26 15:20:37,240 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 49 with val rms_score: 0.7570
351
+ 2025-09-26 15:20:47,830 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0242 | Val rms_score: 0.7684
352
+ 2025-09-26 15:20:58,442 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0247 | Val rms_score: 0.7539
353
+ 2025-09-26 15:20:59,118 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5355
354
+ 2025-09-26 15:20:59,788 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 51 with val rms_score: 0.7539
355
+ 2025-09-26 15:21:10,461 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0228 | Val rms_score: 0.7593
356
+ 2025-09-26 15:21:20,904 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0224 | Val rms_score: 0.7614
357
+ 2025-09-26 15:21:31,097 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0232 | Val rms_score: 0.7636
358
+ 2025-09-26 15:21:41,261 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0218 | Val rms_score: 0.7585
359
+ 2025-09-26 15:21:51,310 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0217 | Val rms_score: 0.7554
360
+ 2025-09-26 15:22:01,805 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0201 | Val rms_score: 0.7637
361
+ 2025-09-26 15:22:12,872 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0210 | Val rms_score: 0.7577
362
+ 2025-09-26 15:22:23,109 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0205 | Val rms_score: 0.7616
363
+ 2025-09-26 15:22:33,917 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0203 | Val rms_score: 0.7602
364
+ 2025-09-26 15:22:44,097 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0192 | Val rms_score: 0.7637
365
+ 2025-09-26 15:22:55,014 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0213 | Val rms_score: 0.7553
366
+ 2025-09-26 15:23:05,277 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0194 | Val rms_score: 0.7558
367
+ 2025-09-26 15:23:15,761 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0213 | Val rms_score: 0.7618
368
+ 2025-09-26 15:23:25,995 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0206 | Val rms_score: 0.7587
369
+ 2025-09-26 15:23:35,947 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0182 | Val rms_score: 0.7579
370
+ 2025-09-26 15:23:47,773 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0210 | Val rms_score: 0.7579
371
+ 2025-09-26 15:23:58,535 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0192 | Val rms_score: 0.7586
372
+ 2025-09-26 15:24:09,596 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0191 | Val rms_score: 0.7579
373
+ 2025-09-26 15:24:20,546 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0194 | Val rms_score: 0.7589
374
+ 2025-09-26 15:24:30,815 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0190 | Val rms_score: 0.7659
375
+ 2025-09-26 15:24:42,109 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0182 | Val rms_score: 0.7575
376
+ 2025-09-26 15:24:52,385 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0195 | Val rms_score: 0.7601
377
+ 2025-09-26 15:25:02,546 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0180 | Val rms_score: 0.7557
378
+ 2025-09-26 15:25:12,789 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0189 | Val rms_score: 0.7513
379
+ 2025-09-26 15:25:12,954 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 7875
380
+ 2025-09-26 15:25:13,556 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 75 with val rms_score: 0.7513
381
+ 2025-09-26 15:25:23,582 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0185 | Val rms_score: 0.7603
382
+ 2025-09-26 15:25:35,318 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0182 | Val rms_score: 0.7584
383
+ 2025-09-26 15:25:45,650 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0187 | Val rms_score: 0.7626
384
+ 2025-09-26 15:25:55,917 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0181 | Val rms_score: 0.7548
385
+ 2025-09-26 15:26:06,124 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0180 | Val rms_score: 0.7581
386
+ 2025-09-26 15:26:16,831 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0217 | Val rms_score: 0.7544
387
+ 2025-09-26 15:26:27,736 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0179 | Val rms_score: 0.7514
388
+ 2025-09-26 15:26:38,137 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0190 | Val rms_score: 0.7557
389
+ 2025-09-26 15:26:49,380 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0181 | Val rms_score: 0.7576
390
+ 2025-09-26 15:26:59,871 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0192 | Val rms_score: 0.7658
391
+ 2025-09-26 15:27:12,046 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0185 | Val rms_score: 0.7608
392
+ 2025-09-26 15:27:24,707 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0170 | Val rms_score: 0.7568
393
+ 2025-09-26 15:27:36,130 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0185 | Val rms_score: 0.7606
394
+ 2025-09-26 15:27:47,301 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0173 | Val rms_score: 0.7575
395
+ 2025-09-26 15:27:58,519 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0177 | Val rms_score: 0.7540
396
+ 2025-09-26 15:28:09,538 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0164 | Val rms_score: 0.7551
397
+ 2025-09-26 15:28:21,544 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0176 | Val rms_score: 0.7531
398
+ 2025-09-26 15:28:32,492 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0167 | Val rms_score: 0.7539
399
+ 2025-09-26 15:28:43,529 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0169 | Val rms_score: 0.7594
400
+ 2025-09-26 15:28:54,293 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0185 | Val rms_score: 0.7530
401
+ 2025-09-26 15:29:05,715 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0173 | Val rms_score: 0.7510
402
+ 2025-09-26 15:29:06,361 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 10080
403
+ 2025-09-26 15:29:07,023 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 96 with val rms_score: 0.7510
404
+ 2025-09-26 15:29:17,934 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0175 | Val rms_score: 0.7547
405
+ 2025-09-26 15:29:28,754 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0174 | Val rms_score: 0.7501
406
+ 2025-09-26 15:29:28,936 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 10290
407
+ 2025-09-26 15:29:29,608 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 98 with val rms_score: 0.7501
408
+ 2025-09-26 15:29:40,341 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0167 | Val rms_score: 0.7508
409
+ 2025-09-26 15:29:50,245 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0177 | Val rms_score: 0.7513
410
+ 2025-09-26 15:29:51,038 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Test rms_score: 0.8221
411
+ 2025-09-26 15:29:51,729 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.8191, Std Dev: 0.0066
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_astrazeneca_ppb_epochs100_batch_size32_20250926_152951.log ADDED
@@ -0,0 +1,327 @@
1
+ 2025-09-26 15:29:51,730 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_ppb
2
+ 2025-09-26 15:29:51,731 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - dataset: astrazeneca_ppb, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
3
+ 2025-09-26 15:29:51,735 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_ppb at 2025-09-26_15-29-51
4
+ 2025-09-26 15:29:56,201 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8000 | Val rms_score: 0.1287
5
+ 2025-09-26 15:29:56,201 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
6
+ 2025-09-26 15:29:57,208 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1287
7
+ 2025-09-26 15:30:02,291 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4639 | Val rms_score: 0.1237
8
+ 2025-09-26 15:30:02,468 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
9
+ 2025-09-26 15:30:03,068 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1237
10
+ 2025-09-26 15:30:08,727 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3268 | Val rms_score: 0.1194
11
+ 2025-09-26 15:30:08,907 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 135
12
+ 2025-09-26 15:30:09,490 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.1194
13
+ 2025-09-26 15:30:14,859 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3250 | Val rms_score: 0.1204
14
+ 2025-09-26 15:30:20,491 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2350 | Val rms_score: 0.1233
15
+ 2025-09-26 15:30:25,565 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2139 | Val rms_score: 0.1206
16
+ 2025-09-26 15:30:31,862 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2146 | Val rms_score: 0.1224
17
+ 2025-09-26 15:30:37,674 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1757 | Val rms_score: 0.1264
18
+ 2025-09-26 15:30:43,243 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1875 | Val rms_score: 0.1245
19
+ 2025-09-26 15:30:48,440 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1333 | Val rms_score: 0.1241
20
+ 2025-09-26 15:30:53,537 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1250 | Val rms_score: 0.1266
21
+ 2025-09-26 15:30:59,689 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1258 | Val rms_score: 0.1209
22
+ 2025-09-26 15:31:05,296 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1181 | Val rms_score: 0.1241
23
+ 2025-09-26 15:31:10,815 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1016 | Val rms_score: 0.1228
24
+ 2025-09-26 15:31:15,911 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0882 | Val rms_score: 0.1208
25
+ 2025-09-26 15:31:21,083 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0910 | Val rms_score: 0.1254
26
+ 2025-09-26 15:31:27,250 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0858 | Val rms_score: 0.1229
27
+ 2025-09-26 15:31:33,131 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1148 | Val rms_score: 0.1278
28
+ 2025-09-26 15:31:38,711 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0910 | Val rms_score: 0.1217
29
+ 2025-09-26 15:31:43,896 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0774 | Val rms_score: 0.1230
30
+ 2025-09-26 15:31:48,504 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0788 | Val rms_score: 0.1231
31
+ 2025-09-26 15:31:54,075 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0764 | Val rms_score: 0.1270
32
+ 2025-09-26 15:32:00,423 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0616 | Val rms_score: 0.1226
33
+ 2025-09-26 15:32:05,970 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0594 | Val rms_score: 0.1225
34
+ 2025-09-26 15:32:11,187 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0550 | Val rms_score: 0.1241
35
+ 2025-09-26 15:32:16,540 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0493 | Val rms_score: 0.1221
36
+ 2025-09-26 15:32:22,564 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0469 | Val rms_score: 0.1243
37
+ 2025-09-26 15:32:28,138 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0490 | Val rms_score: 0.1246
38
+ 2025-09-26 15:32:33,889 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0430 | Val rms_score: 0.1241
39
+ 2025-09-26 15:32:39,035 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0524 | Val rms_score: 0.1257
40
+ 2025-09-26 15:32:44,251 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0465 | Val rms_score: 0.1243
41
+ 2025-09-26 15:32:50,413 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0480 | Val rms_score: 0.1235
42
+ 2025-09-26 15:32:55,940 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0444 | Val rms_score: 0.1225
43
+ 2025-09-26 15:33:01,567 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0474 | Val rms_score: 0.1249
44
+ 2025-09-26 15:33:06,749 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0385 | Val rms_score: 0.1220
45
+ 2025-09-26 15:33:12,022 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0494 | Val rms_score: 0.1239
46
+ 2025-09-26 15:33:18,123 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0375 | Val rms_score: 0.1241
47
+ 2025-09-26 15:33:23,779 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0400 | Val rms_score: 0.1232
48
+ 2025-09-26 15:33:29,388 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0354 | Val rms_score: 0.1218
49
+ 2025-09-26 15:33:34,568 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0347 | Val rms_score: 0.1223
50
+ 2025-09-26 15:33:39,831 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0314 | Val rms_score: 0.1215
51
+ 2025-09-26 15:33:45,992 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0340 | Val rms_score: 0.1214
52
+ 2025-09-26 15:33:50,954 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0337 | Val rms_score: 0.1218
53
+ 2025-09-26 15:33:55,968 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0325 | Val rms_score: 0.1215
54
+ 2025-09-26 15:34:01,805 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0300 | Val rms_score: 0.1226
55
+ 2025-09-26 15:34:07,148 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0326 | Val rms_score: 0.1214
56
+ 2025-09-26 15:34:13,251 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0448 | Val rms_score: 0.1244
57
+ 2025-09-26 15:34:18,884 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0328 | Val rms_score: 0.1217
58
+ 2025-09-26 15:34:24,547 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0408 | Val rms_score: 0.1221
+ 2025-09-26 15:34:29,676 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0302 | Val rms_score: 0.1212
+ 2025-09-26 15:34:34,918 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0280 | Val rms_score: 0.1225
+ 2025-09-26 15:34:41,367 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0279 | Val rms_score: 0.1211
+ 2025-09-26 15:34:47,159 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0273 | Val rms_score: 0.1216
+ 2025-09-26 15:34:52,808 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0281 | Val rms_score: 0.1212
+ 2025-09-26 15:34:58,122 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0297 | Val rms_score: 0.1212
+ 2025-09-26 15:35:03,524 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0279 | Val rms_score: 0.1219
+ 2025-09-26 15:35:09,789 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0260 | Val rms_score: 0.1228
+ 2025-09-26 15:35:15,434 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0229 | Val rms_score: 0.1207
+ 2025-09-26 15:35:20,934 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0257 | Val rms_score: 0.1227
+ 2025-09-26 15:35:26,306 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0241 | Val rms_score: 0.1214
+ 2025-09-26 15:35:31,715 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0229 | Val rms_score: 0.1211
+ 2025-09-26 15:35:37,881 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0245 | Val rms_score: 0.1212
+ 2025-09-26 15:35:43,471 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0250 | Val rms_score: 0.1209
+ 2025-09-26 15:35:48,753 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0250 | Val rms_score: 0.1226
+ 2025-09-26 15:35:53,852 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0233 | Val rms_score: 0.1213
+ 2025-09-26 15:35:58,821 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0267 | Val rms_score: 0.1213
+ 2025-09-26 15:36:05,560 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0294 | Val rms_score: 0.1218
+ 2025-09-26 15:36:11,249 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0276 | Val rms_score: 0.1208
+ 2025-09-26 15:36:16,904 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0167 | Val rms_score: 0.1202
+ 2025-09-26 15:36:22,188 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0236 | Val rms_score: 0.1204
+ 2025-09-26 15:36:27,406 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0216 | Val rms_score: 0.1210
+ 2025-09-26 15:36:33,684 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0227 | Val rms_score: 0.1208
+ 2025-09-26 15:36:39,298 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0221 | Val rms_score: 0.1212
+ 2025-09-26 15:36:44,735 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0237 | Val rms_score: 0.1200
+ 2025-09-26 15:36:49,949 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0234 | Val rms_score: 0.1209
+ 2025-09-26 15:36:55,165 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0216 | Val rms_score: 0.1212
+ 2025-09-26 15:37:01,283 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0259 | Val rms_score: 0.1212
+ 2025-09-26 15:37:06,699 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0268 | Val rms_score: 0.1211
+ 2025-09-26 15:37:11,963 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0218 | Val rms_score: 0.1210
+ 2025-09-26 15:37:17,038 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0210 | Val rms_score: 0.1217
+ 2025-09-26 15:37:22,228 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0252 | Val rms_score: 0.1213
+ 2025-09-26 15:37:27,829 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0212 | Val rms_score: 0.1217
+ 2025-09-26 15:37:32,996 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0194 | Val rms_score: 0.1202
+ 2025-09-26 15:37:38,077 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0191 | Val rms_score: 0.1203
+ 2025-09-26 15:37:43,205 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0228 | Val rms_score: 0.1196
+ 2025-09-26 15:37:48,333 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0196 | Val rms_score: 0.1220
+ 2025-09-26 15:37:54,274 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0174 | Val rms_score: 0.1212
+ 2025-09-26 15:37:59,290 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0206 | Val rms_score: 0.1206
+ 2025-09-26 15:38:05,226 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0194 | Val rms_score: 0.1200
+ 2025-09-26 15:38:10,425 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0197 | Val rms_score: 0.1222
+ 2025-09-26 15:38:15,698 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0201 | Val rms_score: 0.1210
+ 2025-09-26 15:38:21,756 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0194 | Val rms_score: 0.1212
+ 2025-09-26 15:38:27,399 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0196 | Val rms_score: 0.1218
+ 2025-09-26 15:38:33,396 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0173 | Val rms_score: 0.1220
+ 2025-09-26 15:38:38,802 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0174 | Val rms_score: 0.1217
+ 2025-09-26 15:38:44,104 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0167 | Val rms_score: 0.1204
+ 2025-09-26 15:38:50,272 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0175 | Val rms_score: 0.1224
+ 2025-09-26 15:38:55,905 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0201 | Val rms_score: 0.1203
+ 2025-09-26 15:39:01,026 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0205 | Val rms_score: 0.1209
+ 2025-09-26 15:39:09,097 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0181 | Val rms_score: 0.1195
+ 2025-09-26 15:39:09,700 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1233
+ 2025-09-26 15:39:10,227 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_ppb at 2025-09-26_15-39-10
+ 2025-09-26 15:39:15,668 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8000 | Val rms_score: 0.1222
+ 2025-09-26 15:39:15,668 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-26 15:39:17,525 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1222
+ 2025-09-26 15:39:24,883 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4500 | Val rms_score: 0.1193
+ 2025-09-26 15:39:25,039 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
+ 2025-09-26 15:39:26,361 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1193
+ 2025-09-26 15:39:31,740 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3661 | Val rms_score: 0.1202
+ 2025-09-26 15:39:37,210 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2653 | Val rms_score: 0.1209
+ 2025-09-26 15:39:43,014 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1963 | Val rms_score: 0.1213
+ 2025-09-26 15:39:48,671 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2000 | Val rms_score: 0.1218
+ 2025-09-26 15:39:54,549 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2052 | Val rms_score: 0.1175
+ 2025-09-26 15:39:54,741 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 315
+ 2025-09-26 15:39:55,349 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.1175
+ 2025-09-26 15:40:00,560 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1757 | Val rms_score: 0.1239
+ 2025-09-26 15:40:05,462 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1562 | Val rms_score: 0.1247
+ 2025-09-26 15:40:10,617 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1403 | Val rms_score: 0.1228
+ 2025-09-26 15:40:15,784 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1222 | Val rms_score: 0.1233
+ 2025-09-26 15:40:21,532 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1078 | Val rms_score: 0.1226
+ 2025-09-26 15:40:26,632 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0979 | Val rms_score: 0.1226
+ 2025-09-26 15:40:31,472 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1031 | Val rms_score: 0.1234
+ 2025-09-26 15:40:36,479 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0806 | Val rms_score: 0.1254
+ 2025-09-26 15:40:41,418 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1078 | Val rms_score: 0.1233
+ 2025-09-26 15:40:46,629 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0809 | Val rms_score: 0.1240
+ 2025-09-26 15:40:51,663 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0762 | Val rms_score: 0.1223
+ 2025-09-26 15:40:56,699 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0663 | Val rms_score: 0.1224
+ 2025-09-26 15:41:02,008 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0663 | Val rms_score: 0.1260
+ 2025-09-26 15:41:07,047 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0635 | Val rms_score: 0.1243
+ 2025-09-26 15:41:13,711 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0615 | Val rms_score: 0.1238
+ 2025-09-26 15:41:19,710 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0589 | Val rms_score: 0.1221
+ 2025-09-26 15:41:24,384 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0566 | Val rms_score: 0.1182
+ 2025-09-26 15:41:29,258 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0509 | Val rms_score: 0.1244
+ 2025-09-26 15:41:33,947 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0545 | Val rms_score: 0.1207
+ 2025-09-26 15:41:39,634 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0518 | Val rms_score: 0.1188
+ 2025-09-26 15:41:44,579 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0476 | Val rms_score: 0.1205
+ 2025-09-26 15:41:49,142 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0426 | Val rms_score: 0.1200
+ 2025-09-26 15:41:54,208 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0417 | Val rms_score: 0.1205
+ 2025-09-26 15:41:59,242 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0443 | Val rms_score: 0.1196
+ 2025-09-26 15:42:04,267 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0373 | Val rms_score: 0.1211
+ 2025-09-26 15:42:08,820 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0398 | Val rms_score: 0.1201
+ 2025-09-26 15:42:13,496 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0536 | Val rms_score: 0.1179
+ 2025-09-26 15:42:18,580 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0439 | Val rms_score: 0.1191
+ 2025-09-26 15:42:23,629 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0408 | Val rms_score: 0.1189
+ 2025-09-26 15:42:29,089 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0342 | Val rms_score: 0.1211
+ 2025-09-26 15:42:33,801 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0285 | Val rms_score: 0.1209
+ 2025-09-26 15:42:38,248 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0339 | Val rms_score: 0.1195
+ 2025-09-26 15:42:42,895 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0328 | Val rms_score: 0.1206
+ 2025-09-26 15:42:47,770 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0316 | Val rms_score: 0.1199
+ 2025-09-26 15:42:52,873 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0325 | Val rms_score: 0.1231
+ 2025-09-26 15:42:57,711 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0366 | Val rms_score: 0.1201
+ 2025-09-26 15:43:02,810 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0356 | Val rms_score: 0.1218
+ 2025-09-26 15:43:08,979 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0328 | Val rms_score: 0.1215
+ 2025-09-26 15:43:13,966 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0314 | Val rms_score: 0.1197
+ 2025-09-26 15:43:19,216 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0344 | Val rms_score: 0.1212
+ 2025-09-26 15:43:23,973 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0293 | Val rms_score: 0.1206
+ 2025-09-26 15:43:28,685 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0215 | Val rms_score: 0.1214
+ 2025-09-26 15:43:33,292 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0276 | Val rms_score: 0.1216
+ 2025-09-26 15:43:37,896 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0266 | Val rms_score: 0.1237
+ 2025-09-26 15:43:42,883 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0248 | Val rms_score: 0.1214
+ 2025-09-26 15:43:47,571 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0240 | Val rms_score: 0.1232
+ 2025-09-26 15:43:51,911 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0305 | Val rms_score: 0.1217
+ 2025-09-26 15:43:56,667 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0283 | Val rms_score: 0.1215
+ 2025-09-26 15:44:01,701 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0285 | Val rms_score: 0.1211
+ 2025-09-26 15:44:06,751 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0293 | Val rms_score: 0.1220
+ 2025-09-26 15:44:10,474 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0283 | Val rms_score: 0.1218
+ 2025-09-26 15:44:14,786 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0252 | Val rms_score: 0.1216
+ 2025-09-26 15:44:19,354 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0208 | Val rms_score: 0.1219
+ 2025-09-26 15:44:24,020 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0243 | Val rms_score: 0.1241
+ 2025-09-26 15:44:29,194 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0241 | Val rms_score: 0.1215
+ 2025-09-26 15:44:34,156 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0249 | Val rms_score: 0.1207
+ 2025-09-26 15:44:38,915 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0253 | Val rms_score: 0.1218
+ 2025-09-26 15:44:42,650 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0202 | Val rms_score: 0.1199
+ 2025-09-26 15:44:47,936 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0224 | Val rms_score: 0.1221
+ 2025-09-26 15:44:54,466 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0229 | Val rms_score: 0.1224
+ 2025-09-26 15:44:59,119 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0201 | Val rms_score: 0.1217
+ 2025-09-26 15:45:03,604 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0477 | Val rms_score: 0.1217
+ 2025-09-26 15:45:08,343 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0288 | Val rms_score: 0.1212
+ 2025-09-26 15:45:13,058 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0208 | Val rms_score: 0.1217
+ 2025-09-26 15:45:17,364 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0209 | Val rms_score: 0.1203
+ 2025-09-26 15:45:22,011 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0207 | Val rms_score: 0.1226
+ 2025-09-26 15:45:26,747 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0221 | Val rms_score: 0.1231
+ 2025-09-26 15:45:31,594 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0222 | Val rms_score: 0.1210
+ 2025-09-26 15:45:36,405 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0211 | Val rms_score: 0.1223
+ 2025-09-26 15:45:41,550 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0192 | Val rms_score: 0.1217
+ 2025-09-26 15:45:45,327 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0197 | Val rms_score: 0.1210
+ 2025-09-26 15:45:49,824 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0188 | Val rms_score: 0.1208
+ 2025-09-26 15:45:54,494 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0185 | Val rms_score: 0.1215
+ 2025-09-26 15:45:59,361 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0200 | Val rms_score: 0.1206
+ 2025-09-26 15:46:04,908 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0200 | Val rms_score: 0.1210
+ 2025-09-26 15:46:09,776 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0210 | Val rms_score: 0.1211
+ 2025-09-26 15:46:14,683 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0176 | Val rms_score: 0.1215
+ 2025-09-26 15:46:18,817 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0159 | Val rms_score: 0.1214
+ 2025-09-26 15:46:23,691 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0179 | Val rms_score: 0.1221
+ 2025-09-26 15:46:28,650 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0197 | Val rms_score: 0.1216
+ 2025-09-26 15:46:33,947 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0175 | Val rms_score: 0.1212
+ 2025-09-26 15:46:39,891 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0145 | Val rms_score: 0.1217
+ 2025-09-26 15:46:44,712 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0176 | Val rms_score: 0.1208
+ 2025-09-26 15:46:48,556 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0165 | Val rms_score: 0.1219
+ 2025-09-26 15:46:53,695 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0168 | Val rms_score: 0.1207
+ 2025-09-26 15:46:58,430 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0178 | Val rms_score: 0.1192
+ 2025-09-26 15:47:02,987 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0178 | Val rms_score: 0.1205
+ 2025-09-26 15:47:07,685 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0181 | Val rms_score: 0.1200
+ 2025-09-26 15:47:12,312 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0157 | Val rms_score: 0.1201
+ 2025-09-26 15:47:17,482 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0166 | Val rms_score: 0.1204
+ 2025-09-26 15:47:20,929 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0252 | Val rms_score: 0.1198
+ 2025-09-26 15:47:28,483 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0166 | Val rms_score: 0.1205
+ 2025-09-26 15:47:32,926 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0158 | Val rms_score: 0.1204
+ 2025-09-26 15:47:33,750 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1265
+ 2025-09-26 15:47:34,272 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_ppb at 2025-09-26_15-47-34
+ 2025-09-26 15:47:38,015 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9111 | Val rms_score: 0.1333
+ 2025-09-26 15:47:38,015 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-26 15:47:41,188 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1333
+ 2025-09-26 15:47:45,576 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4722 | Val rms_score: 0.1223
+ 2025-09-26 15:47:45,775 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
+ 2025-09-26 15:47:46,382 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1223
+ 2025-09-26 15:47:50,562 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4357 | Val rms_score: 0.1250
+ 2025-09-26 15:47:55,417 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3097 | Val rms_score: 0.1228
+ 2025-09-26 15:48:00,404 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2275 | Val rms_score: 0.1237
+ 2025-09-26 15:48:05,355 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2069 | Val rms_score: 0.1248
+ 2025-09-26 15:48:10,614 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2302 | Val rms_score: 0.1257
+ 2025-09-26 15:48:15,589 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1632 | Val rms_score: 0.1241
+ 2025-09-26 15:48:20,642 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1484 | Val rms_score: 0.1192
+ 2025-09-26 15:48:20,789 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 405
+ 2025-09-26 15:48:20,366 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.1192
+ 2025-09-26 15:48:25,428 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1174 | Val rms_score: 0.1240
+ 2025-09-26 15:48:30,146 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1326 | Val rms_score: 0.1223
+ 2025-09-26 15:48:35,367 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1219 | Val rms_score: 0.1227
+ 2025-09-26 15:48:40,567 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1125 | Val rms_score: 0.1241
+ 2025-09-26 15:48:45,236 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1047 | Val rms_score: 0.1223
+ 2025-09-26 15:48:50,290 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0958 | Val rms_score: 0.1247
+ 2025-09-26 15:48:54,213 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0965 | Val rms_score: 0.1255
+ 2025-09-26 15:49:00,148 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0903 | Val rms_score: 0.1247
+ 2025-09-26 15:49:05,311 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0719 | Val rms_score: 0.1241
+ 2025-09-26 15:49:10,121 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0854 | Val rms_score: 0.1233
+ 2025-09-26 15:49:15,411 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0743 | Val rms_score: 0.1238
+ 2025-09-26 15:49:22,067 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0698 | Val rms_score: 0.1214
+ 2025-09-26 15:49:27,198 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0698 | Val rms_score: 0.1254
+ 2025-09-26 15:49:33,234 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0580 | Val rms_score: 0.1228
+ 2025-09-26 15:49:37,951 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0531 | Val rms_score: 0.1243
+ 2025-09-26 15:49:42,821 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0537 | Val rms_score: 0.1244
+ 2025-09-26 15:49:48,280 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0517 | Val rms_score: 0.1227
+ 2025-09-26 15:49:53,691 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0510 | Val rms_score: 0.1233
+ 2025-09-26 15:49:57,634 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0514 | Val rms_score: 0.1235
+ 2025-09-26 15:50:02,931 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0586 | Val rms_score: 0.1231
+ 2025-09-26 15:50:08,455 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0500 | Val rms_score: 0.1250
+ 2025-09-26 15:50:13,839 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0483 | Val rms_score: 0.1220
+ 2025-09-26 15:50:20,244 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0486 | Val rms_score: 0.1237
+ 2025-09-26 15:50:25,895 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0497 | Val rms_score: 0.1242
+ 2025-09-26 15:50:29,806 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0435 | Val rms_score: 0.1229
+ 2025-09-26 15:50:34,746 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0417 | Val rms_score: 0.1205
+ 2025-09-26 15:50:39,656 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0326 | Val rms_score: 0.1209
+ 2025-09-26 15:50:45,447 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0410 | Val rms_score: 0.1227
+ 2025-09-26 15:50:50,530 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0369 | Val rms_score: 0.1231
+ 2025-09-26 15:50:55,716 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0328 | Val rms_score: 0.1231
+ 2025-09-26 15:50:59,228 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0378 | Val rms_score: 0.1223
+ 2025-09-26 15:51:04,153 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0363 | Val rms_score: 0.1234
+ 2025-09-26 15:51:09,815 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0387 | Val rms_score: 0.1246
+ 2025-09-26 15:51:14,659 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0379 | Val rms_score: 0.1241
+ 2025-09-26 15:51:19,402 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0349 | Val rms_score: 0.1234
+ 2025-09-26 15:51:25,417 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0366 | Val rms_score: 0.1255
+ 2025-09-26 15:51:29,119 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0382 | Val rms_score: 0.1263
+ 2025-09-26 15:51:35,058 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0324 | Val rms_score: 0.1225
+ 2025-09-26 15:51:40,760 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0333 | Val rms_score: 0.1241
+ 2025-09-26 15:51:46,573 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0285 | Val rms_score: 0.1238
+ 2025-09-26 15:51:51,664 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0290 | Val rms_score: 0.1235
+ 2025-09-26 15:51:56,507 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0297 | Val rms_score: 0.1234
+ 2025-09-26 15:52:00,701 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0287 | Val rms_score: 0.1220
+ 2025-09-26 15:52:06,095 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0304 | Val rms_score: 0.1239
+ 2025-09-26 15:52:11,366 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0290 | Val rms_score: 0.1227
+ 2025-09-26 15:52:16,470 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0262 | Val rms_score: 0.1219
+ 2025-09-26 15:52:21,541 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0324 | Val rms_score: 0.1225
+ 2025-09-26 15:52:26,989 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0288 | Val rms_score: 0.1214
+ 2025-09-26 15:52:30,977 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0264 | Val rms_score: 0.1219
+ 2025-09-26 15:52:36,108 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0274 | Val rms_score: 0.1231
+ 2025-09-26 15:52:41,102 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0248 | Val rms_score: 0.1229
+ 2025-09-26 15:52:46,534 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0233 | Val rms_score: 0.1233
+ 2025-09-26 15:52:52,201 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0238 | Val rms_score: 0.1227
+ 2025-09-26 15:52:57,440 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0283 | Val rms_score: 0.1216
+ 2025-09-26 15:53:01,346 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0286 | Val rms_score: 0.1222
+ 2025-09-26 15:53:06,751 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0255 | Val rms_score: 0.1223
+ 2025-09-26 15:53:11,422 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0233 | Val rms_score: 0.1231
+ 2025-09-26 15:53:17,514 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0234 | Val rms_score: 0.1220
+ 2025-09-26 15:53:22,487 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0253 | Val rms_score: 0.1218
+ 2025-09-26 15:53:27,438 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0186 | Val rms_score: 0.1238
+ 2025-09-26 15:53:30,960 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0231 | Val rms_score: 0.1216
+ 2025-09-26 15:53:35,770 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0221 | Val rms_score: 0.1216
+ 2025-09-26 15:53:41,159 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0252 | Val rms_score: 0.1231
+ 2025-09-26 15:53:46,175 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0234 | Val rms_score: 0.1231
+ 2025-09-26 15:53:51,284 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0254 | Val rms_score: 0.1223
+ 2025-09-26 15:53:56,052 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0209 | Val rms_score: 0.1228
+ 2025-09-26 15:54:01,310 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0250 | Val rms_score: 0.1217
+ 2025-09-26 15:54:05,737 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0218 | Val rms_score: 0.1228
+ 2025-09-26 15:54:11,082 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0219 | Val rms_score: 0.1219
+ 2025-09-26 15:54:16,387 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0218 | Val rms_score: 0.1198
+ 2025-09-26 15:54:21,556 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0224 | Val rms_score: 0.1221
+ 2025-09-26 15:54:26,662 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0215 | Val rms_score: 0.1212
+ 2025-09-26 15:54:33,153 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0203 | Val rms_score: 0.1220
+ 2025-09-26 15:54:36,985 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0203 | Val rms_score: 0.1220
+ 2025-09-26 15:54:42,167 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0226 | Val rms_score: 0.1220
+ 2025-09-26 15:54:47,409 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0184 | Val rms_score: 0.1215
+ 2025-09-26 15:54:52,443 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0192 | Val rms_score: 0.1225
+ 2025-09-26 15:54:58,188 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0172 | Val rms_score: 0.1217
+ 2025-09-26 15:55:03,081 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0192 | Val rms_score: 0.1227
+ 2025-09-26 15:55:07,721 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0307 | Val rms_score: 0.1222
+ 2025-09-26 15:55:12,720 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0220 | Val rms_score: 0.1194
+ 2025-09-26 15:55:18,401 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0212 | Val rms_score: 0.1211
+ 2025-09-26 15:55:24,192 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0208 | Val rms_score: 0.1220
+ 2025-09-26 15:55:29,410 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0208 | Val rms_score: 0.1211
+ 2025-09-26 15:55:34,483 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0187 | Val rms_score: 0.1216
+ 2025-09-26 15:55:37,944 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0179 | Val rms_score: 0.1215
+ 2025-09-26 15:55:43,033 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0146 | Val rms_score: 0.1216
+ 2025-09-26 15:55:49,200 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0194 | Val rms_score: 0.1229
+ 2025-09-26 15:55:53,830 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0188 | Val rms_score: 0.1224
+ 2025-09-26 15:55:58,484 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0204 | Val rms_score: 0.1217
+ 2025-09-26 15:56:05,250 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0209 | Val rms_score: 0.1225
+ 2025-09-26 15:56:05,751 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1212
+ 2025-09-26 15:56:06,232 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.1237, Std Dev: 0.0022
logs_modchembert_regression_ModChemBERT-MLM-DAPT/modchembert_deepchem_splits_run_astrazeneca_solubility_epochs100_batch_size32_20250926_155606.log ADDED
@@ -0,0 +1,379 @@
+ 2025-09-26 15:56:06,234 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_solubility
+ 2025-09-26 15:56:06,234 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - dataset: astrazeneca_solubility, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
+ 2025-09-26 15:56:06,239 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_solubility at 2025-09-26_15-56-06
+ 2025-09-26 15:56:11,105 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9556 | Val rms_score: 0.9198
+ 2025-09-26 15:56:11,105 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-26 15:56:15,568 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9198
+ 2025-09-26 15:56:21,482 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5944 | Val rms_score: 0.9036
+ 2025-09-26 15:56:21,705 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
+ 2025-09-26 15:56:22,920 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.9036
+ 2025-09-26 15:56:28,136 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4929 | Val rms_score: 0.9392
+ 2025-09-26 15:56:33,214 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4333 | Val rms_score: 1.0123
+ 2025-09-26 15:56:37,259 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4450 | Val rms_score: 0.8988
+ 2025-09-26 15:56:36,119 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 225
+ 2025-09-26 15:56:36,842 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.8988
+ 2025-09-26 15:56:41,578 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3306 | Val rms_score: 0.9005
+ 2025-09-26 15:56:46,962 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3083 | Val rms_score: 0.8822
+ 2025-09-26 15:56:47,153 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 315
+ 2025-09-26 15:56:47,713 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.8822
+ 2025-09-26 15:56:54,248 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2458 | Val rms_score: 0.9098
+ 2025-09-26 15:57:00,047 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2687 | Val rms_score: 0.8945
+ 2025-09-26 15:57:06,084 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2417 | Val rms_score: 0.9091
+ 2025-09-26 15:57:09,976 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1944 | Val rms_score: 0.8907
+ 2025-09-26 15:57:15,819 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2328 | Val rms_score: 0.9029
+ 2025-09-26 15:57:21,295 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1611 | Val rms_score: 1.0089
+ 2025-09-26 15:57:26,681 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1729 | Val rms_score: 0.9563
+ 2025-09-26 15:57:31,755 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1604 | Val rms_score: 0.9749
+ 2025-09-26 15:57:36,857 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1305 | Val rms_score: 0.8925
+ 2025-09-26 15:57:41,481 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1194 | Val rms_score: 0.8719
+ 2025-09-26 15:57:41,637 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 765
+ 2025-09-26 15:57:42,273 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 17 with val rms_score: 0.8719
+ 2025-09-26 15:57:47,244 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0902 | Val rms_score: 0.8957
+ 2025-09-26 15:57:52,163 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1049 | Val rms_score: 0.9008
+ 2025-09-26 15:57:57,385 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1215 | Val rms_score: 0.8984
+ 2025-09-26 15:58:02,148 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1903 | Val rms_score: 0.8694
+ 2025-09-26 15:58:02,756 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 945
+ 2025-09-26 15:58:03,344 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 21 with val rms_score: 0.8694
+ 2025-09-26 15:58:08,418 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1104 | Val rms_score: 0.9016
+ 2025-09-26 15:58:13,002 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0826 | Val rms_score: 0.8556
+ 2025-09-26 15:58:13,217 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1035
+ 2025-09-26 15:58:13,841 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 23 with val rms_score: 0.8556
+ 2025-09-26 15:58:18,827 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0896 | Val rms_score: 0.8713
+ 2025-09-26 15:58:24,023 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0881 | Val rms_score: 0.8783
+ 2025-09-26 15:58:29,453 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0931 | Val rms_score: 0.8689
+ 2025-09-26 15:58:35,293 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0880 | Val rms_score: 0.8680
+ 2025-09-26 15:58:40,794 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0750 | Val rms_score: 0.8674
+ 2025-09-26 15:58:44,461 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0852 | Val rms_score: 0.8786
+ 2025-09-26 15:58:49,585 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0885 | Val rms_score: 0.8666
+ 2025-09-26 15:58:55,101 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0663 | Val rms_score: 0.8662
+ 2025-09-26 15:59:00,942 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0621 | Val rms_score: 0.8619
+ 2025-09-26 15:59:06,098 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0639 | Val rms_score: 0.8582
+ 2025-09-26 15:59:10,903 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0599 | Val rms_score: 0.8631
+ 2025-09-26 15:59:14,381 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0618 | Val rms_score: 0.8806
+ 2025-09-26 15:59:19,272 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0547 | Val rms_score: 0.8942
+ 2025-09-26 15:59:24,595 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0503 | Val rms_score: 0.8815
+ 2025-09-26 15:59:29,442 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0455 | Val rms_score: 0.8761
+ 2025-09-26 15:59:33,934 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0462 | Val rms_score: 0.8779
+ 2025-09-26 15:59:38,852 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0476 | Val rms_score: 0.8613
+ 2025-09-26 15:59:42,241 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0524 | Val rms_score: 0.8736
+ 2025-09-26 15:59:47,550 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0434 | Val rms_score: 0.8919
+ 2025-09-26 15:59:52,280 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0518 | Val rms_score: 0.8880
+ 2025-09-26 15:59:56,833 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0677 | Val rms_score: 0.8756
+ 2025-09-26 16:00:03,091 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0500 | Val rms_score: 0.8730
+ 2025-09-26 16:00:08,229 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0517 | Val rms_score: 0.8706
+ 2025-09-26 16:00:13,489 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0510 | Val rms_score: 0.8640
+ 2025-09-26 16:00:16,877 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0405 | Val rms_score: 0.8661
+ 2025-09-26 16:00:21,444 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0311 | Val rms_score: 0.8644
+ 2025-09-26 16:00:26,180 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0361 | Val rms_score: 0.8785
+ 2025-09-26 16:00:30,903 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0382 | Val rms_score: 0.8663
+ 2025-09-26 16:00:36,073 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0389 | Val rms_score: 0.8618
+ 2025-09-26 16:00:40,813 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0415 | Val rms_score: 0.8826
+ 2025-09-26 16:00:43,906 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0479 | Val rms_score: 0.8828
+ 2025-09-26 16:00:48,696 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0788 | Val rms_score: 0.8806
+ 2025-09-26 16:00:53,473 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0492 | Val rms_score: 0.8894
+ 2025-09-26 16:00:58,716 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0451 | Val rms_score: 0.8723
+ 2025-09-26 16:01:03,317 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0330 | Val rms_score: 0.8770
+ 2025-09-26 16:01:07,801 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0340 | Val rms_score: 0.8741
+ 2025-09-26 16:01:12,409 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0345 | Val rms_score: 0.8791
+ 2025-09-26 16:01:15,730 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0361 | Val rms_score: 0.8738
+ 2025-09-26 16:01:20,981 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0342 | Val rms_score: 0.8707
+ 2025-09-26 16:01:25,753 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0306 | Val rms_score: 0.8682
+ 2025-09-26 16:01:30,173 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0337 | Val rms_score: 0.8658
+ 2025-09-26 16:01:34,614 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0327 | Val rms_score: 0.8635
+ 2025-09-26 16:01:39,433 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0319 | Val rms_score: 0.8726
+ 2025-09-26 16:01:45,076 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0336 | Val rms_score: 0.8665
+ 2025-09-26 16:01:50,095 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0316 | Val rms_score: 0.8671
+ 2025-09-26 16:01:54,945 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0307 | Val rms_score: 0.8703
+ 2025-09-26 16:01:59,869 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0312 | Val rms_score: 0.8635
+ 2025-09-26 16:02:04,919 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0309 | Val rms_score: 0.8607
+ 2025-09-26 16:02:10,748 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0318 | Val rms_score: 0.8644
+ 2025-09-26 16:02:15,821 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0330 | Val rms_score: 0.8610
+ 2025-09-26 16:02:18,781 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0310 | Val rms_score: 0.8655
+ 2025-09-26 16:02:23,515 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0321 | Val rms_score: 0.8650
+ 2025-09-26 16:02:28,546 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0363 | Val rms_score: 0.8647
+ 2025-09-26 16:02:34,298 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0299 | Val rms_score: 0.8628
+ 2025-09-26 16:02:39,039 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0264 | Val rms_score: 0.8675
+ 2025-09-26 16:02:43,670 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0292 | Val rms_score: 0.8601
+ 2025-09-26 16:02:47,312 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0339 | Val rms_score: 0.8686
+ 2025-09-26 16:02:52,046 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0319 | Val rms_score: 0.8663
+ 2025-09-26 16:02:57,290 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0276 | Val rms_score: 0.8643
+ 2025-09-26 16:03:02,021 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0253 | Val rms_score: 0.8659
+ 2025-09-26 16:03:06,517 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0280 | Val rms_score: 0.8635
+ 2025-09-26 16:03:11,555 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0278 | Val rms_score: 0.8663
+ 2025-09-26 16:03:16,357 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0267 | Val rms_score: 0.8676
+ 2025-09-26 16:03:19,863 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0243 | Val rms_score: 0.8741
+ 2025-09-26 16:03:24,656 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0283 | Val rms_score: 0.8677
+ 2025-09-26 16:03:30,430 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0311 | Val rms_score: 0.8650
+ 2025-09-26 16:03:35,898 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0281 | Val rms_score: 0.8619
+ 2025-09-26 16:03:41,021 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0283 | Val rms_score: 0.8662
+ 2025-09-26 16:03:46,794 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0271 | Val rms_score: 0.8710
+ 2025-09-26 16:03:50,493 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0267 | Val rms_score: 0.8620
+ 2025-09-26 16:03:55,206 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0294 | Val rms_score: 0.8639
+ 2025-09-26 16:04:00,095 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0269 | Val rms_score: 0.8706
+ 2025-09-26 16:04:05,111 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0245 | Val rms_score: 0.8662
+ 2025-09-26 16:04:10,744 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0248 | Val rms_score: 0.8613
+ 2025-09-26 16:04:15,390 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0233 | Val rms_score: 0.8655
+ 2025-09-26 16:04:22,421 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0238 | Val rms_score: 0.8615
+ 2025-09-26 16:04:27,128 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0241 | Val rms_score: 0.8693
+ 2025-09-26 16:04:27,790 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.9303
+ 2025-09-26 16:04:28,294 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_solubility at 2025-09-26_16-04-28
+ 2025-09-26 16:04:32,401 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8778 | Val rms_score: 0.9968
+ 2025-09-26 16:04:32,401 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-26 16:04:35,953 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9968
+ 2025-09-26 16:04:40,859 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5556 | Val rms_score: 0.8894
+ 2025-09-26 16:04:41,061 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
+ 2025-09-26 16:04:41,677 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.8894
+ 2025-09-26 16:04:47,354 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4893 | Val rms_score: 0.9299
+ 2025-09-26 16:04:50,697 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4028 | Val rms_score: 0.8859
+ 2025-09-26 16:04:50,889 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 180
+ 2025-09-26 16:04:51,480 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.8859
+ 2025-09-26 16:04:56,392 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3525 | Val rms_score: 0.9198
+ 2025-09-26 16:05:01,354 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2931 | Val rms_score: 0.9250
+ 2025-09-26 16:05:07,166 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2344 | Val rms_score: 0.8972
+ 2025-09-26 16:05:11,952 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2347 | Val rms_score: 0.8935
+ 2025-09-26 16:05:16,634 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2141 | Val rms_score: 0.9613
+ 2025-09-26 16:05:21,472 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1986 | Val rms_score: 0.9052
+ 2025-09-26 16:05:25,134 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1861 | Val rms_score: 0.9268
+ 2025-09-26 16:05:30,749 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1773 | Val rms_score: 0.8679
+ 2025-09-26 16:05:30,915 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 540
+ 2025-09-26 16:05:31,520 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.8679
+ 2025-09-26 16:05:37,956 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1403 | Val rms_score: 0.8738
+ 2025-09-26 16:05:42,858 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1109 | Val rms_score: 0.8865
+ 2025-09-26 16:05:47,942 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1097 | Val rms_score: 0.8552
+ 2025-09-26 16:05:48,137 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 675
+ 2025-09-26 16:05:48,778 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.8552
+ 2025-09-26 16:05:52,726 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1031 | Val rms_score: 0.8589
+ 2025-09-26 16:05:59,490 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1062 | Val rms_score: 0.8888
+ 2025-09-26 16:06:05,077 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0863 | Val rms_score: 0.8905
+ 2025-09-26 16:06:10,134 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1076 | Val rms_score: 0.8633
+ 2025-09-26 16:06:15,049 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0993 | Val rms_score: 0.8443
+ 2025-09-26 16:06:15,223 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 900
+ 2025-09-26 16:06:15,942 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.8443
+ 2025-09-26 16:06:21,659 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1069 | Val rms_score: 0.8720
+ 2025-09-26 16:06:26,469 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1056 | Val rms_score: 0.8739
+ 2025-09-26 16:06:32,638 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0924 | Val rms_score: 1.0247
+ 2025-09-26 16:06:38,238 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1535 | Val rms_score: 0.8665
+ 2025-09-26 16:06:43,782 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0881 | Val rms_score: 0.8543
+ 2025-09-26 16:06:49,290 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0698 | Val rms_score: 0.8559
+ 2025-09-26 16:06:53,698 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0672 | Val rms_score: 0.8534
+ 2025-09-26 16:06:58,939 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0597 | Val rms_score: 0.8533
+ 2025-09-26 16:07:04,560 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0672 | Val rms_score: 0.8501
+ 2025-09-26 16:07:09,971 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0503 | Val rms_score: 0.8558
+ 2025-09-26 16:07:15,813 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0510 | Val rms_score: 0.8628
+ 2025-09-26 16:07:21,510 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0605 | Val rms_score: 0.8898
+ 2025-09-26 16:07:25,342 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0910 | Val rms_score: 0.8805
+ 2025-09-26 16:07:30,749 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0578 | Val rms_score: 0.8528
+ 2025-09-26 16:07:36,581 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0465 | Val rms_score: 0.8445
+ 2025-09-26 16:07:42,135 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0410 | Val rms_score: 0.8483
+ 2025-09-26 16:07:47,904 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0424 | Val rms_score: 0.8487
+ 2025-09-26 16:07:53,107 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0523 | Val rms_score: 0.8433
+ 2025-09-26 16:07:53,276 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1710
+ 2025-09-26 16:07:53,940 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 38 with val rms_score: 0.8433
+ 2025-09-26 16:07:57,651 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0500 | Val rms_score: 0.8577
+ 2025-09-26 16:08:02,582 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0469 | Val rms_score: 0.8616
+ 2025-09-26 16:08:07,554 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0439 | Val rms_score: 0.8642
+ 2025-09-26 16:08:13,348 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0639 | Val rms_score: 0.8471
+ 2025-09-26 16:08:18,405 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0433 | Val rms_score: 0.8602
+ 2025-09-26 16:08:23,912 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0642 | Val rms_score: 0.8506
+ 2025-09-26 16:08:28,408 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0419 | Val rms_score: 0.8685
+ 2025-09-26 16:08:33,269 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0497 | Val rms_score: 0.8548
+ 2025-09-26 16:08:38,660 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0500 | Val rms_score: 0.8486
+ 2025-09-26 16:08:43,348 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0418 | Val rms_score: 0.8494
+ 2025-09-26 16:08:48,305 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0434 | Val rms_score: 0.8471
+ 2025-09-26 16:08:53,024 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0398 | Val rms_score: 0.8414
+ 2025-09-26 16:08:53,194 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2250
+ 2025-09-26 16:08:53,803 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 50 with val rms_score: 0.8414
+ 2025-09-26 16:08:57,223 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0372 | Val rms_score: 0.8574
+ 2025-09-26 16:09:02,586 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0373 | Val rms_score: 0.8646
+ 2025-09-26 16:09:07,168 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0356 | Val rms_score: 0.8614
+ 2025-09-26 16:09:11,898 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0417 | Val rms_score: 0.8454
+ 2025-09-26 16:09:16,569 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0469 | Val rms_score: 0.8619
+ 2025-09-26 16:09:21,370 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0322 | Val rms_score: 0.8508
+ 2025-09-26 16:09:26,625 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0316 | Val rms_score: 0.8566
+ 2025-09-26 16:09:29,804 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0340 | Val rms_score: 0.8558
+ 2025-09-26 16:09:34,548 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0309 | Val rms_score: 0.8563
+ 2025-09-26 16:09:39,271 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0314 | Val rms_score: 0.8413
+ 2025-09-26 16:09:39,435 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2700
+ 2025-09-26 16:09:40,066 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 60 with val rms_score: 0.8413
+ 2025-09-26 16:09:45,261 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0297 | Val rms_score: 0.8539
+ 2025-09-26 16:09:50,993 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0280 | Val rms_score: 0.8527
+ 2025-09-26 16:09:55,966 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0312 | Val rms_score: 0.8656
+ 2025-09-26 16:09:59,267 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0306 | Val rms_score: 0.8636
+ 2025-09-26 16:10:04,017 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0291 | Val rms_score: 0.8549
+ 2025-09-26 16:10:08,720 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0274 | Val rms_score: 0.8546
+ 2025-09-26 16:10:14,789 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0271 | Val rms_score: 0.8563
+ 2025-09-26 16:10:19,817 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0288 | Val rms_score: 0.8549
+ 2025-09-26 16:10:24,807 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0273 | Val rms_score: 0.8573
+ 2025-09-26 16:10:29,762 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0280 | Val rms_score: 0.8501
+ 2025-09-26 16:10:33,270 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0274 | Val rms_score: 0.8483
+ 2025-09-26 16:10:39,270 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0246 | Val rms_score: 0.8536
+ 2025-09-26 16:10:44,518 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0267 | Val rms_score: 0.8532
+ 2025-09-26 16:10:49,704 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0302 | Val rms_score: 0.8617
+ 2025-09-26 16:10:55,116 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0274 | Val rms_score: 0.8647
+ 2025-09-26 16:11:00,230 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0334 | Val rms_score: 0.8591
+ 2025-09-26 16:11:03,927 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0269 | Val rms_score: 0.8542
+ 2025-09-26 16:11:09,171 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0334 | Val rms_score: 0.8574
+ 2025-09-26 16:11:14,778 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0286 | Val rms_score: 0.8556
+ 2025-09-26 16:11:19,891 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0241 | Val rms_score: 0.8502
+ 2025-09-26 16:11:24,985 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0280 | Val rms_score: 0.8693
+ 2025-09-26 16:11:30,555 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0576 | Val rms_score: 0.8531
+ 2025-09-26 16:11:33,739 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0304 | Val rms_score: 0.8564
+ 2025-09-26 16:11:39,338 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0274 | Val rms_score: 0.8557
+ 2025-09-26 16:11:44,504 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0236 | Val rms_score: 0.8524
+ 2025-09-26 16:11:49,691 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0226 | Val rms_score: 0.8547
+ 2025-09-26 16:11:55,607 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0255 | Val rms_score: 0.8573
+ 2025-09-26 16:12:01,001 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0238 | Val rms_score: 0.8510
+ 2025-09-26 16:12:06,031 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0283 | Val rms_score: 0.8529
+ 2025-09-26 16:12:11,513 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0226 | Val rms_score: 0.8559
+ 2025-09-26 16:12:17,068 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0233 | Val rms_score: 0.8497
+ 2025-09-26 16:12:23,027 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0222 | Val rms_score: 0.8464
+ 2025-09-26 16:12:28,418 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0203 | Val rms_score: 0.8514
+ 2025-09-26 16:12:32,278 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0223 | Val rms_score: 0.8538
+ 2025-09-26 16:12:37,521 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0219 | Val rms_score: 0.8515
+ 2025-09-26 16:12:42,812 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0205 | Val rms_score: 0.8555
+ 2025-09-26 16:12:48,322 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0217 | Val rms_score: 0.8545
+ 2025-09-26 16:12:53,005 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0219 | Val rms_score: 0.8536
+ 2025-09-26 16:13:00,704 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0207 | Val rms_score: 0.8566
+ 2025-09-26 16:13:04,919 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0221 | Val rms_score: 0.8562
+ 2025-09-26 16:13:05,743 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.9163
+ 2025-09-26 16:13:06,276 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_solubility at 2025-09-26_16-13-06
+ 2025-09-26 16:13:10,984 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9056 | Val rms_score: 0.9642
+ 2025-09-26 16:13:10,985 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
+ 2025-09-26 16:13:13,752 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9642
+ 2025-09-26 16:13:19,201 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5639 | Val rms_score: 0.9080
+ 2025-09-26 16:13:19,393 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
+ 2025-09-26 16:13:19,964 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.9080
+ 2025-09-26 16:13:26,545 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4821 | Val rms_score: 0.9294
+ 2025-09-26 16:13:31,869 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4306 | Val rms_score: 0.8921
+ 2025-09-26 16:13:32,071 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 180
+ 2025-09-26 16:13:32,694 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.8921
+ 2025-09-26 16:13:37,584 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3925 | Val rms_score: 0.8898
+ 2025-09-26 16:13:37,785 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 225
+ 2025-09-26 16:13:38,648 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.8898
+ 2025-09-26 16:13:44,178 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3750 | Val rms_score: 0.9157
+ 2025-09-26 16:13:49,486 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3021 | Val rms_score: 0.9280
+ 2025-09-26 16:13:54,745 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2514 | Val rms_score: 0.8809
+ 2025-09-26 16:13:54,953 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 360
+ 2025-09-26 16:13:55,798 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.8809
+ 2025-09-26 16:14:01,079 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2531 | Val rms_score: 0.8715
+ 2025-09-26 16:14:01,276 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 405
+ 2025-09-26 16:14:01,874 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.8715
+ 2025-09-26 16:14:05,549 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2389 | Val rms_score: 0.8723
+ 2025-09-26 16:14:10,870 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1875 | Val rms_score: 0.8991
+ 2025-09-26 16:14:16,712 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1789 | Val rms_score: 0.8974
+ 2025-09-26 16:14:21,877 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1722 | Val rms_score: 0.8684
+ 2025-09-26 16:14:22,084 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 585
+ 2025-09-26 16:14:22,753 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.8684
+ 2025-09-26 16:14:28,504 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1458 | Val rms_score: 0.8664
+ 2025-09-26 16:14:28,716 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 630
+ 2025-09-26 16:14:29,307 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 14 with val rms_score: 0.8664
+ 2025-09-26 16:14:34,381 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1410 | Val rms_score: 0.8750
+ 2025-09-26 16:14:38,179 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1414 | Val rms_score: 0.8742
+ 2025-09-26 16:14:44,059 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1375 | Val rms_score: 0.8772
+ 2025-09-26 16:14:49,733 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1359 | Val rms_score: 0.8971
+ 2025-09-26 16:14:54,848 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1264 | Val rms_score: 0.8641
+ 2025-09-26 16:14:55,041 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 855
+ 2025-09-26 16:14:55,645 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.8641
+ 2025-09-26 16:15:01,393 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0965 | Val rms_score: 0.8633
+ 2025-09-26 16:15:01,593 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 900
+ 2025-09-26 16:15:02,204 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.8633
+ 2025-09-26 16:15:06,125 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0924 | Val rms_score: 0.8633
+ 2025-09-26 16:15:06,833 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 945
+ 2025-09-26 16:15:07,442 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 21 with val rms_score: 0.8633
+ 2025-09-26 16:15:12,554 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0896 | Val rms_score: 0.8874
+ 2025-09-26 16:15:18,611 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0862 | Val rms_score: 0.8763
+ 2025-09-26 16:15:23,603 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1042 | Val rms_score: 0.8659
+ 2025-09-26 16:15:28,750 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0856 | Val rms_score: 0.8570
+ 2025-09-26 16:15:28,949 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1125
+ 2025-09-26 16:15:29,559 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.8570
+ 2025-09-26 16:15:35,011 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0896 | Val rms_score: 0.8444
+ 2025-09-26 16:15:35,732 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1170
+ 2025-09-26 16:15:36,306 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.8444
+ 2025-09-26 16:15:40,378 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0740 | Val rms_score: 0.8696
+ 2025-09-26 16:15:45,049 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0764 | Val rms_score: 0.8760
+ 2025-09-26 16:15:49,812 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.1281 | Val rms_score: 0.8640
+ 2025-09-26 16:15:54,942 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.1451 | Val rms_score: 0.8557
+ 2025-09-26 16:15:59,941 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.1069 | Val rms_score: 0.8585
+ 2025-09-26 16:16:05,021 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0648 | Val rms_score: 0.8497
+ 2025-09-26 16:16:08,295 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0583 | Val rms_score: 0.8609
+ 2025-09-26 16:16:13,080 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0607 | Val rms_score: 0.8647
+ 2025-09-26 16:16:17,809 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0552 | Val rms_score: 0.8670
+ 2025-09-26 16:16:22,628 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0563 | Val rms_score: 0.8560
+ 2025-09-26 16:16:27,918 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0608 | Val rms_score: 0.8541
+ 2025-09-26 16:16:32,671 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0430 | Val rms_score: 0.8502
+ 2025-09-26 16:16:37,441 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0479 | Val rms_score: 0.8583
+ 2025-09-26 16:16:40,794 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0469 | Val rms_score: 0.8466
+ 2025-09-26 16:16:45,588 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0439 | Val rms_score: 0.8485
+ 2025-09-26 16:16:50,624 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0441 | Val rms_score: 0.8549
+ 2025-09-26 16:16:55,409 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0462 | Val rms_score: 0.8586
+ 2025-09-26 16:17:00,374 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0469 | Val rms_score: 0.8439
+ 2025-09-26 16:17:00,535 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1980
+ 2025-09-26 16:17:01,183 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 44 with val rms_score: 0.8439
+ 2025-09-26 16:17:07,369 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0413 | Val rms_score: 0.8452
+ 2025-09-26 16:17:11,293 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0389 | Val rms_score: 0.8483
+ 2025-09-26 16:17:17,493 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0391 | Val rms_score: 0.8546
+ 2025-09-26 16:17:23,117 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0389 | Val rms_score: 0.8573
+ 2025-09-26 16:17:28,656 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0492 | Val rms_score: 0.8431
+ 2025-09-26 16:17:28,823 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2205
+ 2025-09-26 16:17:29,426 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 49 with val rms_score: 0.8431
+ 2025-09-26 16:17:34,818 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0514 | Val rms_score: 0.8569
+ 2025-09-26 16:17:40,316 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0425 | Val rms_score: 0.8525
+ 2025-09-26 16:17:44,806 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0391 | Val rms_score: 0.8458
+ 2025-09-26 16:17:49,881 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0365 | Val rms_score: 0.8520
+ 2025-09-26 16:17:55,263 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0424 | Val rms_score: 0.8452
+ 2025-09-26 16:18:00,902 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0398 | Val rms_score: 0.8634
+ 2025-09-26 16:18:06,609 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0402 | Val rms_score: 0.8550
+ 2025-09-26 16:18:11,290 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0382 | Val rms_score: 0.8577
+ 2025-09-26 16:18:16,955 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0297 | Val rms_score: 0.8494
+ 2025-09-26 16:18:22,546 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0378 | Val rms_score: 0.8567
+ 2025-09-26 16:18:28,143 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0378 | Val rms_score: 0.8497
+ 2025-09-26 16:18:33,823 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0356 | Val rms_score: 0.8440
+ 2025-09-26 16:18:39,894 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0351 | Val rms_score: 0.8515
+ 2025-09-26 16:18:43,799 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0353 | Val rms_score: 0.8426
+ 2025-09-26 16:18:43,965 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2835
+ 2025-09-26 16:18:44,634 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 63 with val rms_score: 0.8426
+ 2025-09-26 16:18:50,445 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0352 | Val rms_score: 0.8438
+ 2025-09-26 16:18:55,814 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0328 | Val rms_score: 0.8470
+ 2025-09-26 16:19:01,113 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0325 | Val rms_score: 0.8437
+ 2025-09-26 16:19:07,479 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0292 | Val rms_score: 0.8515
+ 2025-09-26 16:19:12,698 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0297 | Val rms_score: 0.8475
340
+ 2025-09-26 16:19:17,562 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0273 | Val rms_score: 0.8467
341
+ 2025-09-26 16:19:25,076 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0295 | Val rms_score: 0.8429
342
+ 2025-09-26 16:19:31,758 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0300 | Val rms_score: 0.8501
343
+ 2025-09-26 16:19:38,439 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0307 | Val rms_score: 0.8492
344
+ 2025-09-26 16:19:44,077 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0314 | Val rms_score: 0.8460
345
+ 2025-09-26 16:19:48,370 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0301 | Val rms_score: 0.8508
346
+ 2025-09-26 16:19:56,927 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0295 | Val rms_score: 0.8486
347
+ 2025-09-26 16:20:03,526 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0277 | Val rms_score: 0.8511
348
+ 2025-09-26 16:20:09,879 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0286 | Val rms_score: 0.8481
349
+ 2025-09-26 16:20:13,854 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0314 | Val rms_score: 0.8546
350
+ 2025-09-26 16:20:19,291 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0306 | Val rms_score: 0.8452
351
+ 2025-09-26 16:20:24,900 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0278 | Val rms_score: 0.8457
352
+ 2025-09-26 16:20:30,422 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0335 | Val rms_score: 0.8739
353
+ 2025-09-26 16:20:36,750 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0448 | Val rms_score: 0.8476
354
+ 2025-09-26 16:20:42,425 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0312 | Val rms_score: 0.8492
355
+ 2025-09-26 16:20:47,082 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0280 | Val rms_score: 0.8542
356
+ 2025-09-26 16:20:52,149 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0291 | Val rms_score: 0.8445
357
+ 2025-09-26 16:20:57,228 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0312 | Val rms_score: 0.8442
358
+ 2025-09-26 16:21:02,755 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0283 | Val rms_score: 0.8420
359
+ 2025-09-26 16:21:02,918 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3915
360
+ 2025-09-26 16:21:03,530 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 87 with val rms_score: 0.8420
361
+ 2025-09-26 16:21:08,643 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0267 | Val rms_score: 0.8407
362
+ 2025-09-26 16:21:08,844 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3960
363
+ 2025-09-26 16:21:09,484 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 88 with val rms_score: 0.8407
364
+ 2025-09-26 16:21:15,454 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0247 | Val rms_score: 0.8493
365
+ 2025-09-26 16:21:21,603 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0262 | Val rms_score: 0.8420
366
+ 2025-09-26 16:21:27,709 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0269 | Val rms_score: 0.8513
367
+ 2025-09-26 16:21:34,490 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0252 | Val rms_score: 0.8518
368
+ 2025-09-26 16:21:40,482 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0247 | Val rms_score: 0.8436
369
+ 2025-09-26 16:21:46,569 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0255 | Val rms_score: 0.8461
370
+ 2025-09-26 16:21:51,379 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0309 | Val rms_score: 0.8530
371
+ 2025-09-26 16:21:56,515 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0359 | Val rms_score: 0.8482
372
+ 2025-09-26 16:22:02,598 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0306 | Val rms_score: 0.8355
373
+ 2025-09-26 16:22:02,779 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 4365
374
+ 2025-09-26 16:22:03,594 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 97 with val rms_score: 0.8355
375
+ 2025-09-26 16:22:09,142 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0250 | Val rms_score: 0.8486
376
+ 2025-09-26 16:22:13,586 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0297 | Val rms_score: 0.8571
377
+ 2025-09-26 16:22:16,650 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0432 | Val rms_score: 0.8509
378
+ 2025-09-26 16:22:17,128 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.9374
379
+ 2025-09-26 16:22:17,697 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.9280, Std Dev: 0.0088