
aoi_clip_high_resolution_crossAttenttionFusion_fusin_gpt_new_sampler

This model is a fine-tuned version of OFA-Sys/chinese-clip-vit-base-patch16 on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the results below):

  • Loss: 5.3131
  • Accuracy: 0.0559
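
The Hub page could not determine this repository's library, and the model name suggests a custom cross-attention fusion head, so it is not certain that the stock ChineseCLIP classes load these fine-tuned weights directly. As a minimal sketch, assuming the standard transformers API of the base checkpoint named above, image-text scoring looks like this; the image path and candidate captions are placeholders:

```python
from PIL import Image
import torch
from transformers import ChineseCLIPModel, ChineseCLIPProcessor

# Base checkpoint named on this card. Whether the fine-tuned repo loads with
# these stock classes is an assumption, not confirmed by the card.
model_id = "OFA-Sys/chinese-clip-vit-base-patch16"
model = ChineseCLIPModel.from_pretrained(model_id)
processor = ChineseCLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")   # hypothetical local image
texts = ["一只猫", "一只狗"]          # hypothetical candidate captions

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores, softmaxed over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```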

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch mirroring them follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 25
  • eval_batch_size: 20
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 200
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 200.0
  • mixed_precision_training: Native AMP
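
Expressed as transformers TrainingArguments, the settings above correspond roughly to the sketch below; output_dir is a placeholder and any option not listed is left at its Trainer default, neither taken from the card.

```python
from transformers import TrainingArguments

# Minimal sketch mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="aoi_clip_finetune",   # placeholder, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=25,
    per_device_eval_batch_size=20,
    gradient_accumulation_steps=8,    # 25 * 8 = total train batch size of 200
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=200.0,
    fp16=True,                        # "Native AMP" mixed precision
)
```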

Training results

| Training Loss | Epoch    | Step  | Validation Loss | Accuracy |
|---------------|----------|-------|-----------------|----------|
| 2.152         | 19.9759  | 6220  | 3.0212          | 0.0537   |
| 1.9424        | 39.9518  | 12440 | 3.4495          | 0.0563   |
| 1.7646        | 59.9277  | 18660 | 3.9469          | 0.0561   |
| 1.6901        | 79.9037  | 24880 | 4.2354          | 0.0550   |
| 1.6562        | 99.8796  | 31100 | 4.6732          | 0.0548   |
| 1.6398        | 119.8555 | 37320 | 4.8612          | 0.0550   |
| 1.6289        | 139.8314 | 43540 | 4.8784          | 0.0550   |
| 1.6192        | 159.8073 | 49760 | 5.2516          | 0.0554   |
| 1.6163        | 179.7832 | 55980 | 5.2837          | 0.0558   |
| 1.6165        | 199.7591 | 62200 | 5.3131          | 0.0558   |

Framework versions

  • Transformers 4.42.3
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
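
A quick way to check that a local environment matches these versions is to print them; this is a minimal sketch assuming all four packages are installed.

```python
import datasets
import tokenizers
import torch
import transformers

# Compare the local environment against the versions listed on this card.
expected = {
    "Transformers": (transformers.__version__, "4.42.3"),
    "Pytorch": (torch.__version__, "2.3.1+cu121"),
    "Datasets": (datasets.__version__, "2.20.0"),
    "Tokenizers": (tokenizers.__version__, "0.19.1"),
}
for name, (installed, listed) in expected.items():
    print(f"{name}: installed {installed}, card lists {listed}")
```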
