gemma2b-summarize-gpt4o
This model is a fine-tuned version of google/gemma-2b on the llama-duo/synth_summarize_dataset_dedup dataset. It achieves the following results on the evaluation set:

- Loss: 2.5343
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
Training results

The following training and validation losses were recorded during fine-tuning:
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5077        | 0.9730 | 18   | 2.7787          |
| 1.6701        | 2.0    | 37   | 2.6000          |
| 1.3757        | 2.9730 | 55   | 2.5216          |
| 1.2905        | 4.0    | 74   | 2.5137          |
| 1.2291        | 4.9730 | 92   | 2.5113          |
| 1.1946        | 6.0    | 111  | 2.5235          |
| 1.1618        | 6.9730 | 129  | 2.5300          |
| 1.1521        | 8.0    | 148  | 2.5335          |
| 1.1470        | 8.9730 | 166  | 2.5343          |
| 1.1400        | 9.7297 | 180  | 2.5343          |
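The card does not publish the training recipe itself, so the following is only a minimal, illustrative sketch of how a comparable supervised fine-tune of google/gemma-2b on llama-duo/synth_summarize_dataset_dedup could be launched with TRL's SFTTrainer. Every hyperparameter value below is a placeholder, not the setting actually used for this model, and the dataset split name is assumed.

```python
# Illustrative sketch only: the card does not include its training code or
# hyperparameters. Repo ids come from the card; all values are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Split name assumed; the card only names the dataset.
dataset = load_dataset("llama-duo/synth_summarize_dataset_dedup", split="train")

config = SFTConfig(
    output_dir="gemma2b-summarize-gpt4o",
    num_train_epochs=10,            # the results table spans roughly 10 epochs
    per_device_train_batch_size=4,  # placeholder
    learning_rate=2e-4,             # placeholder
    bf16=True,
)

trainer = SFTTrainer(
    model="google/gemma-2b",  # base model named in the card
    args=config,
    # Assumes the dataset exposes a column SFTTrainer can consume
    # (e.g. "text" or conversational "messages").
    train_dataset=dataset,
)
trainer.train()
```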
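For inference, here is a minimal sketch assuming the checkpoint is published as a standard causal language model on the Hugging Face Hub. The repo id below is hypothetical (inferred from the card title and the dataset's organization), and the prompt template is an assumption, since the card does not document the format used for the synthetic summarization data.

```python
# Hedged usage sketch: the repo id and prompt format are assumptions,
# not confirmed by the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llama-duo/gemma2b-summarize-gpt4o"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

article = (
    "Coffee consumption has been linked in several large cohort studies to a "
    "modestly lower risk of type 2 diabetes, though causality remains unproven."
)
# The prompt template is a guess at a plain instruction format.
prompt = f"Summarize the following text.\n\n{article}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=96, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```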