Chatbot-RACCIS
This model is a fine-tuned version of mistralai/Ministral-8B-Instruct-2410 on an unknown dataset. On the evaluation set it reaches a validation loss of 2.1830 at the final logged step (step 500); see the training results table below.
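Since the dataset and intended use are not documented, the snippet below is only a minimal sketch of how a fine-tuned instruct model like this one is typically loaded for chat inference with the transformers library. The repository id is a placeholder, and the dtype/device settings are assumptions.

```python
# Minimal inference sketch (assumptions: placeholder repo id, bfloat16 weights, a GPU is available).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/ministral-8b-chatbot-finetune"  # placeholder; replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption; switch to float16/float32 if bfloat16 is unsupported
    device_map="auto",
)

# Ministral-8B-Instruct-2410 ships a chat template; the fine-tune is assumed to keep it.
messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```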
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
Training hyperparameters

The following hyperparameters were used during training:

Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2547        | 1.7336 | 100  | 2.3270          |
| 2.1866        | 3.4541 | 200  | 2.2232          |
| 2.0063        | 5.1747 | 300  | 2.1858          |
| 1.9367        | 6.9083 | 400  | 2.1675          |
| 1.88          | 8.6288 | 500  | 2.1830          |
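As a quick way to read the validation losses above, the sketch below converts them to perplexity (perplexity = exp(cross-entropy loss)). The loss values are copied from the table; nothing else is assumed.

```python
import math

# Validation losses from the training results table above, keyed by step.
val_losses = {100: 2.3270, 200: 2.2232, 300: 2.1858, 400: 2.1675, 500: 2.1830}

for step, loss in val_losses.items():
    # Perplexity is the exponential of the mean cross-entropy loss.
    print(f"step {step}: loss {loss:.4f} -> perplexity {math.exp(loss):.2f}")
```

On that reading, the best checkpoint by validation loss is step 400 (loss 2.1675, perplexity ≈ 8.74), with a slight regression by step 500 (loss 2.1830, perplexity ≈ 8.87).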
Base model: mistralai/Ministral-8B-Instruct-2410