Part of the llamafactory Writer agents collection (6 items): models fine-tuned on LeoYML/FoamGPT to serve as the input writer agent for foam agent.
This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on the train dataset. On the evaluation set it achieves the results reported in the training results table below.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
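As a rough usage sketch, the snippet below loads the fine-tuned checkpoint with Hugging Face Transformers and asks it to draft an OpenFOAM input file. The repo id and the prompt are placeholders, and it assumes any adapter weights have been merged into a standard causal-LM checkpoint.

```python
# Minimal inference sketch. Assumptions: the repo id below is a placeholder for
# the published checkpoint, and the prompt is only an example of a writer-agent request.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/foamgpt-writer-llama3.1-8b"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Example request for an OpenFOAM input file (illustrative only).
messages = [
    {"role": "user",
     "content": "Write the system/controlDict for a transient incompressible pisoFoam case."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Within foam agent this model fills the input-writer role, so in practice the prompt would be assembled by the surrounding pipeline rather than written by hand.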
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3833        | 1.0    | 42   | 0.3712          | 0.9116   |
| 0.298         | 2.0    | 84   | 0.2805          | 0.9280   |
| 0.2038        | 3.0    | 126  | 0.2475          | 0.9400   |
| 0.1427        | 4.0    | 168  | 0.2243          | 0.9458   |
| 0.1081        | 5.0    | 210  | 0.2245          | 0.9490   |
| 0.066         | 6.0    | 252  | 0.2289          | 0.9516   |
| 0.0503        | 7.0    | 294  | 0.2457          | 0.9523   |
| 0.0401        | 8.0    | 336  | 0.2616          | 0.9527   |
| 0.0338        | 8.7904 | 369  | 0.2624          | 0.9526   |
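The collection name points to LLaMA-Factory as the training framework. Purely as a hedged sketch of how a comparable SFT run could be configured (every value, name, and path below is a placeholder assumption, not a setting confirmed by this card), one could write a config and pass it to `llamafactory-cli train`:

```python
# Hedged reproduction sketch: all hyperparameter values, dataset names, and paths
# below are placeholder assumptions, NOT the settings used to train this model.
import yaml

config = {
    "model_name_or_path": "meta-llama/Llama-3.1-8B-Instruct",
    "stage": "sft",                       # supervised fine-tuning
    "do_train": True,
    "finetuning_type": "lora",            # assumption: LoRA adapters
    "lora_target": "all",
    "dataset": "foamgpt_writer",          # hypothetical entry in data/dataset_info.json
    "template": "llama3",
    "cutoff_len": 4096,
    "val_size": 0.1,                      # hold out part of the data as the eval set
    "output_dir": "saves/foamgpt-writer", # hypothetical output path
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 8,
    "learning_rate": 1.0e-4,
    "num_train_epochs": 9.0,              # placeholder; the table above ends near epoch 9
    "lr_scheduler_type": "cosine",
    "warmup_ratio": 0.1,
    "bf16": True,
    "logging_steps": 10,
}

with open("foamgpt_writer_sft.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# Then launch training with:
#   llamafactory-cli train foamgpt_writer_sft.yaml
```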