A10-Llasa-1B_220K

This model is a fine-tuned version of HKUSTAudio/Llasa-1B on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 7.3216
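
Since PEFT is listed among the framework versions below, this repository presumably holds a PEFT/LoRA adapter trained on top of the base checkpoint. The following is a minimal, untested sketch of loading it with transformers and peft; the repository id is taken from this card, while the dtype and other loading choices are assumptions.

```python
# Sketch only: load the fine-tuned adapter on top of HKUSTAudio/Llasa-1B.
# Repo ids come from this card; dtype/device handling is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "HKUSTAudio/Llasa-1B"
ADAPTER_ID = "zhcheng/A10-Llasa-1B_220K"  # this repository (assumed to contain the PEFT adapter)

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)  # attach the adapter weights
model.eval()
```

Llasa-style inference (text-to-speech-token generation and codec decoding) follows the base model's documentation, which this card does not restate.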

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 3
  • eval_batch_size: 3
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 24
  • total_eval_batch_size: 6
  • optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 1
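
For reference, the settings above map onto Hugging Face TrainingArguments roughly as sketched below. The effective batch size works out as 3 per device × 2 GPUs × 4 accumulation steps = 24, matching total_train_batch_size. Anything not stated on this card (output directory, precision, logging/eval cadence) is an assumption.

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
# output_dir and bf16 are assumptions; everything else mirrors the list.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="A10-Llasa-1B_220K",   # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    gradient_accumulation_steps=4,    # 3 per device x 2 GPUs x 4 = 24 total train batch
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    bf16=True,                        # assumed; precision is not stated on the card
)
```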

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|---------------|--------|-------|-----------------|
| No log        | 0      | 0     | 7.8117          |
| 7.3694        | 0.0089 | 1000  | 7.5275          |
| 7.3694        | 0.0089 | 1000  | 7.5158          |
| 7.0835        | 0.0715 | 2000  | 7.4015          |
| 7.0793        | 0.1072 | 3000  | 7.3785          |
| 7.0461        | 0.1430 | 4000  | 7.3690          |
| 7.0339        | 0.1787 | 5000  | 7.3580          |
| 6.9696        | 0.2144 | 6000  | 7.3513          |
| 7.033         | 0.2502 | 7000  | 7.3444          |
| 6.9768        | 0.2859 | 8000  | 7.3387          |
| 7.1218        | 0.3216 | 9000  | 7.3378          |
| 7.041         | 0.3574 | 10000 | 7.3314          |
| 6.9799        | 0.3931 | 11000 | 7.3350          |
| 7.0261        | 0.4289 | 12000 | 7.3297          |
| 6.888         | 0.4646 | 13000 | 7.3288          |
| 6.9483        | 0.5003 | 14000 | 7.3285          |
| 6.989         | 0.5361 | 15000 | 7.3269          |
| 7.0167        | 0.5718 | 16000 | 7.3243          |
| 6.9611        | 0.6076 | 17000 | 7.3273          |
| 6.9077        | 0.6433 | 18000 | 7.3256          |
| 7.0845        | 0.6790 | 19000 | 7.3235          |
| 6.8593        | 0.7148 | 20000 | 7.3207          |
| 6.8621        | 0.7505 | 21000 | 7.3216          |
| 7.1707        | 0.7863 | 22000 | 7.3225          |
| 6.9153        | 0.8220 | 23000 | 7.3209          |
| 6.9139        | 0.8577 | 24000 | 7.3217          |
| 6.9           | 0.8935 | 25000 | 7.3214          |
| 6.7397        | 0.9292 | 26000 | 7.3207          |
| 6.9967        | 0.9649 | 27000 | 7.3216          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.45.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.20.1