# Llama3.1-8b-instruct-SFT-2024-09-18_LoRAs
This model is a LoRA fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the `generator` dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 0.9931
- Accuracy: 0.0009
- Bleu: 0.5303
- Rouge1: 0.7979
- Rouge2: 0.5322
- Rougel: 0.6881
- Rougelsum: 0.7836
- Perplexity: 2.6995
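
This checkpoint is published as PEFT LoRA adapters rather than full weights (see the framework versions below), so it loads on top of the gated base model. A minimal usage sketch, assuming the adapter repo id `ccibeekeoc42/Llama3.1-8b-instruct-SFT-2024-09-18_LoRAs` and the `transformers`/`peft` versions listed under "Framework versions"; exact generation settings are not recorded on this card:

```python
# pip install transformers peft torch (versions listed under "Framework versions")
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "ccibeekeoc42/Llama3.1-8b-instruct-SFT-2024-09-18_LoRAs"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapters on top of the instruct base model.
model = PeftModel.from_pretrained(base, adapter_id)

# Generate with the instruct model's chat template.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```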
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.5
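
These settings map directly onto `transformers.TrainingArguments`. A sketch under that assumption; options not listed on this card (gradient accumulation, mixed precision, the trainer class itself) are unknown and left at defaults:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Llama3.1-8b-instruct-SFT-2024-09-18_LoRAs",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=1.5,
    # Adam settings below match the betas/epsilon listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```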
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Bleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Perplexity |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|:---------:|:----------:|
| 1.8865 | 0.0812 | 100 | 1.4597 | 0.0010 | 0.4381 | 0.7484 | 0.4472 | 0.6115 | 0.7309 | 4.3047 |
| 1.556 | 0.1625 | 200 | 1.3056 | 0.0007 | 0.4613 | 0.7654 | 0.4686 | 0.6322 | 0.7469 | 3.6898 |
| 1.4258 | 0.2437 | 300 | 1.2155 | 0.0007 | 0.4815 | 0.7745 | 0.4848 | 0.6475 | 0.7567 | 3.3720 |
| 1.3455 | 0.3249 | 400 | 1.1721 | 0.0007 | 0.4877 | 0.7756 | 0.4906 | 0.6551 | 0.7588 | 3.2287 |
| 1.2937 | 0.4062 | 500 | 1.1408 | 0.0009 | 0.4944 | 0.7808 | 0.4958 | 0.6591 | 0.7640 | 3.1291 |
| 1.2661 | 0.4874 | 600 | 1.1149 | 0.0008 | 0.5032 | 0.7854 | 0.5060 | 0.6659 | 0.7694 | 3.0491 |
| 1.2571 | 0.5686 | 700 | 1.0901 | 0.0009 | 0.5081 | 0.7897 | 0.5105 | 0.6707 | 0.7749 | 2.9745 |
| 1.2529 | 0.6499 | 800 | 1.0758 | 0.0009 | 0.5130 | 0.7888 | 0.5151 | 0.6739 | 0.7720 | 2.9322 |
| 1.2122 | 0.7311 | 900 | 1.0639 | 0.0010 | 0.5133 | 0.7895 | 0.5158 | 0.6737 | 0.7737 | 2.8974 |
| 1.2081 | 0.8123 | 1000 | 1.0521 | 0.0009 | 0.5144 | 0.7902 | 0.5148 | 0.6755 | 0.7743 | 2.8636 |
| 1.1804 | 0.8936 | 1100 | 1.0411 | 0.0009 | 0.5175 | 0.7920 | 0.5192 | 0.6789 | 0.7770 | 2.8321 |
| 1.1616 | 0.9748 | 1200 | 1.0311 | 0.0009 | 0.5209 | 0.7924 | 0.5205 | 0.6794 | 0.7764 | 2.8041 |
| 1.1607 | 1.0561 | 1300 | 1.0244 | 0.0010 | 0.5215 | 0.7935 | 0.5243 | 0.6812 | 0.7787 | 2.7855 |
| 1.1554 | 1.1373 | 1400 | 1.0168 | 0.0010 | 0.5241 | 0.7953 | 0.5258 | 0.6830 | 0.7800 | 2.7642 |
| 1.153 | 1.2185 | 1500 | 1.0103 | 0.0009 | 0.5263 | 0.7957 | 0.5263 | 0.6841 | 0.7812 | 2.7464 |
| 1.1488 | 1.2998 | 1600 | 1.0032 | 0.0009 | 0.5250 | 0.7969 | 0.5284 | 0.6842 | 0.7815 | 2.7268 |
| 1.1488 | 1.3810 | 1700 | 0.9979 | 0.0010 | 0.5280 | 0.7979 | 0.5281 | 0.6864 | 0.7819 | 2.7126 |
| 1.1529 | 1.4622 | 1800 | 0.9931 | 0.0009 | 0.5303 | 0.7979 | 0.5322 | 0.6881 | 0.7836 | 2.6995 |
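
The Perplexity column is the exponential of the validation loss, so the two track each other exactly; for example, at the final evaluation step:

```python
import math

# Perplexity = exp(cross-entropy loss); matches the last row of the table.
print(math.exp(0.9931))  # ≈ 2.6995
```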
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- PyTorch 2.0.1+cu118
- Datasets 3.0.0
- Tokenizers 0.19.1