wav2vec2-base-nsc-demo-4

This model is a fine-tuned version of facebook/wav2vec2-base-960h; the training dataset is not specified in this card. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.3016
  • WER: 0.1720
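
To transcribe audio with this checkpoint, the standard Transformers CTC pipeline applies. The sketch below is a minimal example, assuming the checkpoint is published as shg1/wav2vec2-base-nsc-demo-4 (the repo id shown on the hub) with the usual wav2vec2 processor bundled; sample.wav is a placeholder path.

```python
# Minimal inference sketch, not confirmed by the card itself.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("shg1/wav2vec2-base-nsc-demo-4")
model = Wav2Vec2ForCTC.from_pretrained("shg1/wav2vec2-base-nsc-demo-4")

# Like the facebook/wav2vec2-base-960h parent, the model expects 16 kHz mono
# input, so resample if needed. "sample.wav" is a placeholder.
waveform, sample_rate = torchaudio.load("sample.wav")
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16000,
                   return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse
# repeats and blanks inside batch_decode.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```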

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5.9591386586384804e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 51
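
The values above map onto a standard transformers TrainingArguments object. The sketch below is an assumption about how the run was configured, not the authors' script; output_dir is a placeholder, and eval_steps=50 is inferred from the 50-step spacing of the results table.

```python
# Hedged reconstruction of the training configuration; the model, datasets,
# data collator, and compute_metrics passed to Trainer are assumed to exist.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-nsc-demo-4",  # placeholder
    learning_rate=5.9591386586384804e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=51,
    evaluation_strategy="steps",
    eval_steps=50,  # inferred: the results table logs every 50 steps
)
```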

Training results

Training Loss   Epoch   Step   Validation Loss   WER
0.7542           2.27     50   0.3351            0.1948
0.3912           4.55    100   0.3016            0.1720
0.2497           6.82    150   0.3247            0.1757
0.2010           9.09    200   0.3111            0.1728
0.1602          11.36    250   0.3259            0.1723
0.1334          13.64    300   0.3431            0.1765
0.1083          15.91    350   0.3413            0.1726
0.1114          18.18    400   0.4089            0.1768
0.0828          20.45    450   0.3531            0.1765
0.0926          22.73    500   0.3481            0.1755
0.0930          25.00    550   0.3379            0.1742
0.0772          27.27    600   0.3628            0.1779
0.0701          29.55    650   0.3747            0.1773
0.0736          31.82    700   0.3834            0.1808
0.0607          34.09    750   0.3747            0.1742
0.0629          36.36    800   0.3683            0.1734
0.0713          38.64    850   0.3671            0.1744
0.0728          40.91    900   0.3632            0.1749
0.0696          43.18    950   0.3615            0.1731
0.0638          45.45   1000   0.3591            0.1755
0.0552          47.73   1050   0.3608            0.1779
0.0578          50.00   1100   0.3630            0.1752
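
The evaluation results reported at the top of this card (loss 0.3016, WER 0.1720) match the step-100 row, the best checkpoint in the table by both validation loss and WER. WER here is the word error rate. A minimal sketch of computing it with the evaluate library follows; using evaluate is an assumption, since the card does not show the metric code, though it is the conventional choice for a Trainer's compute_metrics.

```python
# Hedged example: computing WER with the `evaluate` library. The exact metric
# code behind this card is not shown in the source.
import evaluate

wer_metric = evaluate.load("wer")

# One substitution ("word" vs "world") over two reference words -> WER = 0.5.
predictions = ["hello world"]
references = ["hello word"]
print(wer_metric.compute(predictions=predictions, references=references))  # 0.5
```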

Framework versions

  • Transformers 4.33.0
  • Pytorch 2.0.0
  • Datasets 2.1.0
  • Tokenizers 0.13.3