Baby-Llama-58M-RUN3_5

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 5.2656
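
Assuming this loss is the mean per-token cross-entropy in nats, it corresponds to a perplexity of roughly exp(5.2656) ≈ 194.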

Model description

More information needed

Intended uses & limitations

More information needed
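
Pending a fuller description, here is a minimal usage sketch, assuming the model is a causal language model loadable through transformers; the repo id below is hypothetical and should be replaced with the actual Hub path:

```python
# Minimal, hypothetical usage sketch. The exact Hub repo id is not stated
# in this card; "Baby-Llama-58M-RUN3_5" below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Baby-Llama-58M-RUN3_5")
model = AutoModelForCausalLM.from_pretrained("Baby-Llama-58M-RUN3_5")

# Generate a short continuation from a prompt.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```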

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):

  • learning_rate: 0.00025
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 20
  • mixed_precision_training: Native AMP
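
As a sketch, these settings correspond roughly to the following Hugging Face `TrainingArguments`; the output directory is hypothetical, and `fp16=True` stands in for "Native AMP":

```python
# Sketch only: reconstructs the hyperparameters listed above as
# TrainingArguments. The output_dir is a hypothetical placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Baby-Llama-58M-RUN3_5",   # hypothetical output path
    learning_rate=2.5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                        # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    num_train_epochs=20,
    fp16=True,                             # "Native AMP" mixed precision
)
```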

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 287.9659      | 1.0   | 12   | 256.0041        |
| 230.7873      | 2.0   | 24   | 212.6014        |
| 207.1002      | 3.0   | 36   | 180.9384        |
| 121.5561      | 4.0   | 48   | 107.3193        |
| 81.2108       | 5.0   | 60   | 71.6529         |
| 45.9781       | 6.0   | 72   | 40.4501         |
| 24.5986       | 7.0   | 84   | 22.4212         |
| 15.2205       | 8.0   | 96   | 13.7469         |
| 10.1247       | 9.0   | 108  | 9.8119          |
| 7.975         | 10.0  | 120  | 7.8583          |
| 6.7087        | 11.0  | 132  | 7.0360          |
| 6.1988        | 12.0  | 144  | 6.4104          |
| 5.6752        | 13.0  | 156  | 6.1222          |
| 5.5155        | 14.0  | 168  | 5.8179          |
| 4.7754        | 15.0  | 180  | 5.5676          |
| 4.816         | 16.0  | 192  | 5.4583          |
| 4.817         | 17.0  | 204  | 5.3641          |
| 4.6966        | 18.0  | 216  | 5.3147          |
| 4.8322        | 19.0  | 228  | 5.2867          |
| 4.4875        | 20.0  | 240  | 5.2656          |
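
Since the losses above span two orders of magnitude, a log-scale plot makes the convergence easier to read. A small sketch, assuming matplotlib is available:

```python
# Sketch: plot the training/validation loss curves from the table above.
import matplotlib.pyplot as plt

epochs = list(range(1, 21))
train_loss = [287.9659, 230.7873, 207.1002, 121.5561, 81.2108, 45.9781,
              24.5986, 15.2205, 10.1247, 7.975, 6.7087, 6.1988, 5.6752,
              5.5155, 4.7754, 4.816, 4.817, 4.6966, 4.8322, 4.4875]
val_loss = [256.0041, 212.6014, 180.9384, 107.3193, 71.6529, 40.4501,
            22.4212, 13.7469, 9.8119, 7.8583, 7.0360, 6.4104, 6.1222,
            5.8179, 5.5676, 5.4583, 5.3641, 5.3147, 5.2867, 5.2656]

plt.plot(epochs, train_loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.yscale("log")  # losses fall from ~288 to ~4.5
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```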

Framework versions

  • Transformers 4.39.1
  • PyTorch 2.1.2+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0