# gemma2b-summarize-gpt4o
This model is a fine-tuned version of google/gemma-2b on the llama-duo/synth_summarize_dataset_dedup dataset. It achieves the following results on the evaluation set:

- Loss: 2.5990
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
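Pending a fuller description, the training corpus named above can be inspected directly with the `datasets` library. A minimal sketch follows; the `train` split name is an assumption, so check the dataset card for the actual splits:

```python
from datasets import load_dataset

# Load the synthetic summarization corpus used for fine-tuning.
# The "train" split name is an assumption; see the dataset card for splits.
ds = load_dataset("llama-duo/synth_summarize_dataset_dedup", split="train")

print(ds)      # features and row count
print(ds[0])   # inspect one example
```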
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1174        | 0.9974 | 292  | 2.4482          |
| 1.0252        | 1.9983 | 585  | 2.4514          |
| 0.988         | 2.9991 | 878  | 2.4683          |
| 0.9741        | 4.0    | 1171 | 2.5000          |
| 0.9342        | 4.9974 | 1463 | 2.5203          |
| 0.9201        | 5.9983 | 1756 | 2.5519          |
| 0.9054        | 6.9991 | 2049 | 2.5763          |
| 0.8902        | 8.0    | 2342 | 2.5922          |
| 0.8818        | 8.9974 | 2634 | 2.5982          |
| 0.8852        | 9.9744 | 2920 | 2.5990          |
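Since the card does not yet include a usage example, here is a minimal inference sketch with `transformers`. The repo id below is hypothetical (substitute the actual checkpoint id from this collection), and the prompt format is an assumption rather than the template used during fine-tuning:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with the actual checkpoint from this collection.
model_id = "llama-duo/gemma2b-summarize-gpt4o"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# The prompt template is an assumption; check the dataset card for the exact format.
prompt = "Summarize the following text:\n\n<your text here>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```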