videomae-base-finetuned-ucf101-subset

This model is a fine-tuned version of MCG-NJU/videomae-base on a subset of the UCF101 action-recognition dataset (as the model name indicates; the auto-generated card did not record the dataset). It achieves the following results on the evaluation set:

  • Loss: 0.2254
  • Accuracy: 0.9180
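The accuracy above is top-1 accuracy: the fraction of evaluation clips whose highest-scoring predicted class matches the label. A minimal sketch of that computation (plain Python; the logits and labels are made-up illustrative values, not outputs of this model):

```python
# Minimal sketch of top-1 accuracy, as reported above.
# Logits and labels below are hypothetical, for illustration only.

def top1_accuracy(logits, labels):
    """Fraction of examples whose argmax logit matches the label."""
    correct = 0
    for row, label in zip(logits, labels):
        pred = max(range(len(row)), key=row.__getitem__)  # argmax
        correct += int(pred == label)
    return correct / len(labels)

logits = [[0.1, 2.3, 0.5],   # predicted class 1
          [1.9, 0.2, 0.4],   # predicted class 0
          [0.3, 0.1, 0.2]]   # predicted class 0
labels = [1, 0, 2]
print(top1_accuracy(logits, labels))  # 2 of 3 correct
```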

Model description

VideoMAE extends masked-autoencoder (MAE) pre-training to video: a Vision Transformer encoder is pre-trained by reconstructing heavily masked video patches. This checkpoint takes the MCG-NJU/videomae-base encoder and fine-tunes it with a classification head for video action recognition. No further details were provided by the author.

Intended uses & limitations

The model is intended for classifying short video clips into the action classes of the UCF101 subset it was fine-tuned on. It has not been evaluated on other datasets or class sets; no further information was provided by the author.

Training and evaluation data

Judging by the model name, the model was trained and evaluated on a subset of the UCF101 action-recognition dataset. The auto-generated card did not record the exact splits or class list.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 325
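With lr_scheduler_type: linear and a warmup ratio of 0.1 over 325 steps, the learning rate ramps up for roughly the first 33 steps and then decays linearly to zero. A small sketch of that schedule (plain Python; the ceil-based rounding of the warmup length is an assumption, not taken from the training code):

```python
import math

def linear_schedule_lr(step, base_lr=5e-5, total_steps=325, warmup_ratio=0.1):
    """Linear warmup followed by linear decay to zero.

    Warmup length is assumed to be ceil(warmup_ratio * total_steps);
    the actual trainer's rounding may differ slightly.
    """
    warmup_steps = math.ceil(warmup_ratio * total_steps)  # 33 here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0, total_steps - step) / (total_steps - warmup_steps)

print(linear_schedule_lr(0))    # 0.0 (start of warmup)
print(linear_schedule_lr(33))   # 5e-05 (peak, end of warmup)
print(linear_schedule_lr(325))  # 0.0 (end of training)
```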

Training results

Training Loss Epoch Step Validation Loss Accuracy
No log 0.0123 4 0.6042 0.8689
No log 1.0123 8 0.5912 0.8852
0.1557 2.0123 12 0.5992 0.8852
0.1557 3.0123 16 0.5476 0.8852
0.1353 4.0123 20 0.5838 0.8689
0.1353 5.0123 24 0.5512 0.8689
0.1353 6.0123 28 0.5185 0.8689
0.1002 7.0123 32 0.4810 0.9016
0.1002 8.0123 36 0.3758 0.9016
0.0612 9.0123 40 0.4661 0.8852
0.0612 10.0123 44 0.4693 0.8525
0.0612 11.0123 48 0.3250 0.8689
0.0395 12.0123 52 0.3807 0.8852
0.0395 13.0123 56 0.3268 0.9016
0.0294 14.0123 60 0.3572 0.8689
0.0294 15.0123 64 0.4752 0.8852
0.0294 16.0123 68 0.3392 0.8852
0.0232 17.0123 72 0.5777 0.8689
0.0232 18.0123 76 0.3572 0.9016
0.0168 19.0123 80 0.6077 0.8361
0.0168 20.0123 84 0.3536 0.9016
0.0168 21.0123 88 0.4749 0.8852
0.0141 22.0123 92 0.3578 0.9016
0.0141 23.0123 96 0.3149 0.8852
0.0113 24.0123 100 0.3504 0.8852
0.0113 25.0123 104 0.2214 0.9180
0.0113 26.0123 108 0.2442 0.9180
0.0143 27.0123 112 0.4498 0.8852
0.0143 28.0123 116 0.3842 0.8689
0.0104 29.0123 120 0.3404 0.8689
0.0104 30.0123 124 0.2934 0.9016
0.0104 31.0123 128 0.2977 0.9180
0.009 32.0123 132 0.3295 0.8852
0.009 33.0123 136 0.3387 0.8689
0.0074 34.0123 140 0.3298 0.8852
0.0074 35.0123 144 0.3200 0.8852
0.0074 36.0123 148 0.3098 0.9016
0.0069 37.0123 152 0.3426 0.8852
0.0069 38.0123 156 0.3258 0.9016
0.0063 39.0123 160 0.3417 0.8852
0.0063 40.0123 164 0.3744 0.8852
0.0063 41.0123 168 0.3680 0.8852
0.0058 42.0123 172 0.3940 0.8689
0.0058 43.0123 176 0.3287 0.8852
0.0163 44.0123 180 0.2990 0.8852
0.0163 45.0123 184 0.3104 0.8852
0.0163 46.0123 188 0.3738 0.8689
0.0059 47.0123 192 0.3384 0.8852
0.0059 48.0123 196 0.3533 0.8852
0.0058 49.0123 200 0.3802 0.9016
0.0058 50.0123 204 0.4168 0.9016
0.0058 51.0123 208 0.3361 0.8852
0.0053 52.0123 212 0.3255 0.8852
0.0053 53.0123 216 0.3412 0.8852
0.0057 54.0123 220 0.3386 0.8852
0.0057 55.0123 224 0.3742 0.8689
0.0057 56.0123 228 0.3378 0.8689
0.0051 57.0123 232 0.3133 0.8852
0.0051 58.0123 236 0.2935 0.8689
0.0048 59.0123 240 0.2900 0.8689
0.0048 60.0123 244 0.2831 0.8852
0.0048 61.0123 248 0.3001 0.8852
0.0048 62.0123 252 0.3225 0.8852
0.0048 63.0123 256 0.3803 0.8852
0.0046 64.0123 260 0.3333 0.8852
0.0046 65.0123 264 0.3481 0.8852
0.0046 66.0123 268 0.3178 0.8852
0.0045 67.0123 272 0.3117 0.8852
0.0045 68.0123 276 0.3241 0.8852
0.0044 69.0123 280 0.3243 0.8852
0.0044 70.0123 284 0.3431 0.8852
0.0044 71.0123 288 0.3221 0.8852
0.0044 72.0123 292 0.2979 0.8852
0.0044 73.0123 296 0.3110 0.8852
0.0043 74.0123 300 0.3329 0.8852
0.0043 75.0123 304 0.3023 0.8852
0.0043 76.0123 308 0.3122 0.8852
0.0045 77.0123 312 0.3068 0.8852
0.0045 78.0123 316 0.3510 0.8852
0.0043 79.0123 320 0.3064 0.8852
0.0043 80.0123 324 0.3018 0.8852
0.0043 81.0031 325 0.2982 0.8852
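Judging from the table, validation loss bottoms out around epoch 25 (0.2214, accuracy 0.9180), close to the final reported numbers. A small sketch of selecting the best checkpoint from such a log (plain Python, using a few rows copied from the table above):

```python
# A few (epoch, validation_loss, accuracy) rows from the table above.
rows = [
    (24.0123, 0.3504, 0.8852),
    (25.0123, 0.2214, 0.9180),
    (26.0123, 0.2442, 0.9180),
    (81.0031, 0.2982, 0.8852),
]

# Pick the checkpoint with the lowest validation loss
# (one common criterion; highest accuracy is another).
best = min(rows, key=lambda r: r[1])
print(best)  # (25.0123, 0.2214, 0.918)
```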

Framework versions

  • Transformers 4.48.0
  • Pytorch 2.0.1+cu118
  • Datasets 3.2.0
  • Tokenizers 0.21.0
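To reproduce this environment, the versions above can be pinned in a requirements file (a sketch; the `+cu118` PyTorch build comes from the PyTorch CUDA 11.8 index, and the exact install command depends on your platform):

```
transformers==4.48.0
torch==2.0.1+cu118
datasets==3.2.0
tokenizers==0.21.0
```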

Model size

  • 86.2M params (Safetensors, F32 tensors)

Model tree for ihsanabdulhakim/videomae-base-finetuned-ucf101-subset

Finetuned from MCG-NJU/videomae-base (one of 572 fine-tunes of that base model).