---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
  - generated_from_trainer
datasets:
  - minds14
metrics:
  - accuracy
model-index:
  - name: my_first_audio_cls
    results:
      - task:
          name: Audio Classification
          type: audio-classification
        dataset:
          name: minds14
          type: minds14
          config: en-US
          split: train
          args: en-US
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.02654867256637168
---

# my_first_audio_cls

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset. It achieves the following results on the evaluation set:

- Loss: 2.6974
- Accuracy: 0.0265

## Model description

More information needed

## Intended uses & limitations

More information needed
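
Until more detail is added here, the checkpoint can at least be exercised with the `audio-classification` pipeline. This is a minimal sketch; the Hub model ID (`hoganpham/my_first_audio_cls`) and the audio file path are assumptions for illustration, not confirmed by this card:

```python
from transformers import pipeline

# Assumed Hub repo ID; replace with the actual ID of this checkpoint.
classifier = pipeline("audio-classification", model="hoganpham/my_first_audio_cls")

# Classify a local audio file (path is a placeholder for illustration).
predictions = classifier("example.wav")
for p in predictions:
    print(f"{p['label']}: {p['score']:.4f}")
```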

## Training and evaluation data

More information needed
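
The model-index metadata above records the en-US configuration of minds14 with the train split. A minimal loading sketch with 🤗 Datasets, assuming the dataset is hosted on the Hub as `PolyAI/minds14`:

```python
from datasets import load_dataset, Audio

# Assumed Hub dataset ID; config and split come from the model-index metadata.
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")

# wav2vec2-base expects 16 kHz input, so resample the audio column accordingly.
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
print(minds)
```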

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):

- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
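
These settings translate directly into `transformers.TrainingArguments`. A minimal reconstruction sketch, where `output_dir` is an assumption; note that a per-device batch size of 32 with 4 gradient-accumulation steps yields the effective total batch size of 128 listed above:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the run configuration from the list above;
# output_dir is an assumption, everything else mirrors the listed values.
training_args = TrainingArguments(
    output_dir="my_first_audio_cls",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    optim="adamw_torch",  # AdamW with default betas=(0.9, 0.999), eps=1e-08
)
```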

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.8   | 3    | 2.6409          | 0.0708   |
| No log        | 1.8   | 6    | 2.6526          | 0.0442   |
| No log        | 2.8   | 9    | 2.6640          | 0.0354   |
| 12.1127       | 3.8   | 12   | 2.6773          | 0.0354   |
| 12.1127       | 4.8   | 15   | 2.6847          | 0.0265   |
| 12.1127       | 5.8   | 18   | 2.6889          | 0.0177   |
| 12.0275       | 6.8   | 21   | 2.6939          | 0.0265   |
| 12.0275       | 7.8   | 24   | 2.6953          | 0.0265   |
| 12.0275       | 8.8   | 27   | 2.6968          | 0.0265   |
| 11.9952       | 9.8   | 30   | 2.6974          | 0.0265   |

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0