CodeLlama-13b-Instruct-sft-5e-3-epoch-100-gsm8k

This model is a fine-tuned version of meta-llama/CodeLlama-13b-Instruct-hf on the meng-lab/CodeLlama-13B-Instruct-gsm8k dataset. It achieves the following results on the evaluation set (see the loading sketch after this list):

  • Loss: 4.0229
  • Loss Layer 5 Head: 1.4382
  • Loss Layer 10 Head: 0.9813
  • Loss Layer 15 Head: 0.9315
  • Loss Layer 20 Head: 0.4901
  • Loss Layer 25 Head: 0.1839
  • Loss Layer 30 Head: 0.1044
  • Loss Layer 35 Head: 0.1004
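
As a point of reference, below is a minimal loading sketch, assuming the checkpoint behaves as a standard Transformers causal LM. The Hub repo id (meng-lab/codellama_13b_instruct_paradec_gsm8k) and BF16 dtype are taken from this card's metadata; the auxiliary per-layer heads reported above may require custom loading code that is not shown here.

```python
# Minimal sketch, assuming this checkpoint loads as a standard causal LM.
# The repo id and BF16 dtype are assumptions from this card's metadata;
# the auxiliary per-layer heads may require custom code not shown here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "meng-lab/codellama_13b_instruct_paradec_gsm8k"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumed: the checkpoint ships BF16 tensors
    device_map="auto",           # requires accelerate; shards 13B across GPUs
)

# GSM8K-style prompt; generation settings are illustrative only.
prompt = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did she sell altogether?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```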

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 0.005
  • train_batch_size: 1
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 128
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
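
For illustration, here is a minimal sketch of how these hyperparameters might map onto Hugging Face TrainingArguments. The actual training script, dataset preprocessing, and the multi-head loss are not published on this card, so the setup below is an assumption about a standard Trainer configuration, not the authors' code.

```python
# Minimal sketch: the hyperparameters above expressed as Hugging Face
# TrainingArguments. The per-layer head losses would need a custom Trainer
# subclass that is not shown here; this block is illustrative only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CodeLlama-13b-Instruct-sft-5e-3-epoch-100-gsm8k",
    learning_rate=5e-3,
    per_device_train_batch_size=1,  # x 8 GPUs x 16 accumulation = 128 total
    per_device_eval_batch_size=2,   # x 8 GPUs = 16 total
    gradient_accumulation_steps=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```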

Training results

| Training Loss | Epoch | Step | Validation Loss | Loss Layer 5 Head | Loss Layer 10 Head | Loss Layer 15 Head | Loss Layer 20 Head | Loss Layer 25 Head | Loss Layer 30 Head | Loss Layer 35 Head |
|---|---|---|---|---|---|---|---|---|---|---|
| 3.5888 | 26.0163 | 200 | 4.9539 | 1.5721 | 1.0672 | 1.1373 | 0.7569 | 0.2971 | 0.1321 | 0.2111 |
| 2.2226 | 52.0325 | 400 | 4.1476 | 1.4725 | 0.9947 | 0.9848 | 0.4952 | 0.1877 | 0.1073 | 0.1141 |
| 1.9091 | 78.0488 | 600 | 4.0229 | 1.4382 | 0.9813 | 0.9315 | 0.4901 | 0.1839 | 0.1044 | 0.1004 |

Framework versions

  • Transformers 4.43.2
  • PyTorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.19.1