This model is a fine-tuned version of meta-llama/CodeLlama-13b-Instruct-hf on the meng-lab/CodeLlama-13B-Instruct-gsm8k dataset. It achieves the following results on the evaluation set (final evaluation, step 600; see the training results table below):
- Loss: 4.0229
- Loss layer 5 head: 1.4382
- Loss layer 10 head: 0.9813
- Loss layer 15 head: 0.9315
- Loss layer 20 head: 0.4901
- Loss layer 25 head: 0.1839
- Loss layer 30 head: 0.1044
- Loss layer 35 head: 0.1004
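For orientation, a fine-tuned checkpoint like this one loads through the standard `transformers` causal-LM API. The sketch below is a minimal example, not the authors' code: the repo id is an assumption inferred from the dataset name above, and the prompt is an arbitrary GSM8K-style question.

```python
# Minimal loading/generation sketch using the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the fine-tuned checkpoint is published under this repo id.
repo_id = "meng-lab/CodeLlama-13B-Instruct-gsm8k"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # 13B parameters: half precision keeps memory manageable
    device_map="auto",
)

prompt = "Q: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```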
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch   | Step | Validation Loss | Loss Layer 5 Head | Loss Layer 10 Head | Loss Layer 15 Head | Loss Layer 20 Head | Loss Layer 25 Head | Loss Layer 30 Head | Loss Layer 35 Head |
|:-------------:|:-------:|:----:|:---------------:|:-----------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|
| 3.5888        | 26.0163 | 200  | 4.9539          | 1.5721            | 1.0672             | 1.1373             | 0.7569             | 0.2971             | 0.1321             | 0.2111             |
| 2.2226        | 52.0325 | 400  | 4.1476          | 1.4725            | 0.9947             | 0.9848             | 0.4952             | 0.1877             | 0.1073             | 0.1141             |
| 1.9091        | 78.0488 | 600  | 4.0229          | 1.4382            | 0.9813             | 0.9315             | 0.4901             | 0.1839             | 0.1044             | 0.1004             |
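The per-head loss columns indicate that, beyond the final-layer output, LM heads attached at intermediate layers (5, 10, ..., 35) are each trained against next-token targets. The sketch below shows one plausible way such per-layer head losses could be computed; it is an illustration under that assumption, not the AdaDecode training code, and `aux_heads` and `per_layer_losses` are hypothetical names.

```python
# Sketch: next-token cross-entropy for LM heads at intermediate layers.
# Layer indices mirror the table columns; all module/function names are
# hypothetical, not taken from the AdaDecode codebase.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM

HEAD_LAYERS = [5, 10, 15, 20, 25, 30, 35]

model = AutoModelForCausalLM.from_pretrained("meta-llama/CodeLlama-13b-Instruct-hf")
# One linear LM head per tapped layer (randomly initialized here; trained in practice).
aux_heads = torch.nn.ModuleList(
    torch.nn.Linear(model.config.hidden_size, model.config.vocab_size, bias=False)
    for _ in HEAD_LAYERS
)

def per_layer_losses(input_ids: torch.Tensor, labels: torch.Tensor) -> dict:
    """Return {layer_index: cross-entropy loss} for each intermediate head."""
    out = model(input_ids, output_hidden_states=True)
    losses = {}
    for head, layer in zip(aux_heads, HEAD_LAYERS):
        logits = head(out.hidden_states[layer])        # (batch, seq, vocab)
        shift_logits = logits[:, :-1, :].contiguous()  # predict token t+1 from t
        shift_labels = labels[:, 1:].contiguous()
        losses[layer] = F.cross_entropy(
            shift_logits.view(-1, shift_logits.size(-1)),
            shift_labels.view(-1),
        )
    return losses
```

How these per-head terms are weighted against the final-layer objective is not specified here; the table reports the validation loss and the individual head losses as separate columns.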