chessgpt2-small-l

This model is a fine-tuned version of gpt2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8139
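
Assuming the reported loss is the mean per-token cross-entropy in nats (the usual convention for causal language modeling with the transformers Trainer), this corresponds to a perplexity of roughly exp(0.8139) ≈ 2.26:

```python
import math

# Perplexity from mean cross-entropy loss; assumes the loss is in nats.
eval_loss = 0.8139
print(math.exp(eval_loss))  # ≈ 2.26
```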

Model description

More information needed

Intended uses & limitations

More information needed
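
Until usage is documented, here is a minimal loading sketch. The repository id dakwi/chessgpt2-small-l is taken from the model tree below; the PGN-style prompt is only an assumption about the training data format, based on the model name.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for text generation.
generator = pipeline("text-generation", model="dakwi/chessgpt2-small-l")

# PGN-style move prompt; an assumption, since the dataset is not documented.
prompt = "1. e4 e5 2. Nf3 Nc6 3."
print(generator(prompt, max_new_tokens=24)[0]["generated_text"])
```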

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the matching TrainingArguments follows the list):

  • learning_rate: 0.0004
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.04
  • num_epochs: 3
  • mixed_precision_training: Native AMP
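
Assuming the standard transformers Trainer was used, the settings above map onto TrainingArguments roughly as follows. The output_dir is a hypothetical placeholder, and fp16 is assumed for "Native AMP" (bf16 is also possible and not stated):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="chessgpt2-small-l",   # hypothetical output path
    learning_rate=4e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,    # 32 * 2 = 64 total train batch size
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.04,
    num_train_epochs=3,
    fp16=True,                        # native AMP mixed precision (assumed fp16)
)
```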

Training results

Training Loss   Epoch    Step    Validation Loss
2.0248          0.1280    2000   1.4713
1.3868          0.2560    4000   1.2433
1.2335          0.3839    6000   1.1391
1.1517          0.5119    8000   1.0795
1.0989          0.6399   10000   1.0348
1.0585          0.7679   12000   1.0035
1.0273          0.8958   14000   0.9743
0.9978          1.0238   16000   0.9511
0.9687          1.1518   18000   0.9305
0.9517          1.2798   20000   0.9125
0.9353          1.4077   22000   0.8987
0.9204          1.5357   24000   0.8827
0.9077          1.6637   26000   0.8713
0.8942          1.7917   28000   0.8585
0.8823          1.9196   30000   0.8479
0.8656          2.0476   32000   0.8402
0.8448          2.1756   34000   0.8336
0.8393          2.3036   36000   0.8270
0.8341          2.4315   38000   0.8221
0.8294          2.5595   40000   0.8185
0.8269          2.6875   42000   0.8158
0.8241          2.8155   44000   0.8144
0.8242          2.9434   46000   0.8139

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1

Model size

  • 85.6M parameters (Safetensors, F32)

Model tree for dakwi/chessgpt2-small-l

  • Base model: gpt2