# gemma2b-summarize-gpt4o
This model is a fine-tuned version of google/gemma-2b on the llama-duo/synth_summarize_dataset_dedup dataset. It achieves the following results on the evaluation set:

- Loss: 2.6191
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
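Since the card does not yet document usage, here is a minimal inference sketch with 🤗 Transformers. The repo id `llama-duo/gemma2b-summarize-gpt4o`, the prompt format, and the bf16 dtype are assumptions, not taken from this card; if the published artifact is a PEFT adapter rather than merged weights, load it via `peft.PeftModel` on top of `google/gemma-2b` instead.

```python
# Usage sketch (assumed repo id and prompt format; adjust to the actual model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llama-duo/gemma2b-summarize-gpt4o"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Gemma weights are commonly loaded in bf16
    device_map="auto",
)

prompt = "Summarize the following text:\n\n<your document here>\n\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```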
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3438        | 1.0   | 73   | 2.5419          |
| 1.1767        | 2.0   | 146  | 2.4979          |
| 1.1163        | 3.0   | 219  | 2.4967          |
| 1.0605        | 4.0   | 292  | 2.5121          |
| 1.0362        | 5.0   | 365  | 2.5401          |
| 1.0052        | 6.0   | 438  | 2.5711          |
| 0.9813        | 7.0   | 511  | 2.5912          |
| 0.9593        | 8.0   | 584  | 2.6132          |
| 0.953         | 9.0   | 657  | 2.6180          |
| 0.9482        | 10.0  | 730  | 2.6191          |
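Validation loss bottoms out at epoch 3 (2.4967) and rises afterward while training loss keeps falling, a typical overfitting pattern, so the epoch-3 checkpoint is the best by this metric. If retraining with the 🤗 `Trainer` (which this card format suggests was used), best-checkpoint selection can be automated; the argument values below are illustrative, not the original run's settings:

```python
# Sketch: keep the checkpoint with the lowest validation loss and stop
# early once it stops improving. All values are illustrative assumptions.
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="gemma2b-summarize-gpt4o",
    num_train_epochs=10,
    eval_strategy="epoch",            # evaluate once per epoch, as in the table
    save_strategy="epoch",
    load_best_model_at_end=True,      # reload the best checkpoint (epoch 3 here)
    metric_for_best_model="eval_loss",
    greater_is_better=False,          # lower validation loss is better
)
# trainer = Trainer(model=model, args=args, ...,
#                   callbacks=[EarlyStoppingCallback(early_stopping_patience=2)])
```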