
nb-whisper-large-v0.8

This model is a fine-tuned version of NbAiLab/nb-whisper-large-v3-RC4 on the NbAiLab/ncc_speech_styling_v2 dataset. It achieves the following results on the evaluation sets:

  • step: 49999
  • validation_nst_loss: 0.4309
  • train_loss: 0.4828
  • validation_nst_wer: 2.2211
  • validation_nst_cer: 0.6758
  • validation_nst_exact_wer: 2.7655
  • validation_nst_exact_cer: 0.7592
  • validation_clean_stortinget_no_loss: 0.7845
  • validation_clean_stortinget_no_wer: 8.8323
  • validation_clean_stortinget_no_cer: 5.6753
  • validation_clean_stortinget_no_exact_wer: 11.6973
  • validation_clean_stortinget_no_exact_cer: 6.1161
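
As a fine-tuned Whisper checkpoint, the model can be loaded through the standard transformers ASR pipeline. A minimal usage sketch follows; the audio filename is a placeholder, and the chunking and language settings are illustrative assumptions rather than documented defaults of this card:

```python
from transformers import pipeline

# Load this checkpoint with the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/nb-whisper-large-v0.8",
)

# "audio.mp3" is a placeholder; substitute your own Norwegian recording.
result = asr(
    "audio.mp3",
    chunk_length_s=30,  # Whisper processes audio in windows of up to 30 s
    generate_kwargs={"task": "transcribe", "language": "no"},
)
print(result["text"])
```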

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 7e-05
  • lr_scheduler_type: linear
  • per_device_train_batch_size: 8
  • total_train_batch_size_per_node: 32
  • total_train_batch_size: 1024
  • total_optimization_steps: 50,000
  • starting_optimization_step: None
  • finishing_optimization_step: 50,000
  • num_train_dataset_workers: 32
  • num_hosts: 32
  • total_num_training_examples: 51,200,000
  • steps_per_epoch: 7254
  • num_beams: None
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.98
  • adam_epsilon: 1e-06
  • dropout: True
  • bpe_dropout_probability: 0.2
  • activation_dropout_probability: 0.1
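
The optimizer and schedule listed above can be reconstructed roughly as follows. This is a minimal PyTorch sketch, not the actual training code (training ran across 32 hosts, and the card does not report a warmup length, so zero warmup is assumed):

```python
import torch
from transformers import WhisperForConditionalGeneration, get_linear_schedule_with_warmup

# Start from the base checkpoint named at the top of this card.
model = WhisperForConditionalGeneration.from_pretrained("NbAiLab/nb-whisper-large-v3-RC4")

# Optimizer settings taken from the hyperparameter list above.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=7e-5,            # learning_rate
    betas=(0.9, 0.98),  # adam_beta1, adam_beta2
    eps=1e-6,           # adam_epsilon
    weight_decay=0.01,  # weight_decay
)

# The card reports a linear scheduler but no warmup length,
# so a zero-warmup linear decay over all 50,000 steps is assumed.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=50_000
)
```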

Training results

In the table below, the "NST" columns are the validation_nst_* metrics and the "Stortinget" columns are the validation_clean_stortinget_no_* metrics; WER and CER values are percentages.

| step  | NST loss | train loss | NST WER | NST CER | NST exact WER | NST exact CER | Stortinget loss | Stortinget WER | Stortinget CER | Stortinget exact WER | Stortinget exact CER |
|-------|----------|------------|---------|---------|---------------|---------------|-----------------|----------------|----------------|----------------------|----------------------|
| 0     | 0.4265   | 0.9701     | 2.1721  | 0.6246  | 2.7056        | 0.7070        | 0.6866          | 8.5836         | 5.4517         | 11.4126              | 5.8853               |
| 5000  | 0.4380   | 0.6065     | 2.5750  | 0.7495  | 3.0922        | 0.8251        | 0.6988          | 9.1284         | 5.8272         | 12.0840              | 6.2946               |
| 10000 | 0.4366   | 0.5640     | 2.3191  | 0.6852  | 2.8417        | 0.7647        | 0.7061          | 9.1378         | 5.7729         | 12.0270              | 6.2225               |
| 15000 | 0.4370   | 0.5506     | 2.3300  | 0.7066  | 2.9234        | 0.7976        | 0.7213          | 8.9673         | 5.6884         | 11.9511              | 6.1640               |
| 20000 | 0.4328   | 0.5284     | 2.3300  | 0.7019  | 2.8962        | 0.7885        | 0.7674          | 8.8915         | 5.6535         | 11.7922              | 6.1013               |
| 25000 | 0.4334   | 0.5133     | 2.3082  | 0.7010  | 2.9016        | 0.7903        | 0.7697          | 9.0194         | 5.7983         | 11.8468              | 6.2373               |
| 30000 | 0.4301   | 0.4996     | 2.1721  | 0.6674  | 2.6948        | 0.7464        | 0.7732          | 8.9223         | 5.7229         | 11.8349              | 6.1726               |
| 35000 | 0.4310   | 0.4957     | 2.2592  | 0.6926  | 2.8472        | 0.7830        | 0.7882          | 8.9744         | 5.7804         | 11.8871              | 6.2323               |
| 40000 | 0.4301   | 0.4999     | 2.1939  | 0.6647  | 2.7165        | 0.7436        | 0.7899          | 8.8868         | 5.6412         | 11.7708              | 6.0880               |
| 45000 | 0.4306   | 0.5049     | 2.2320  | 0.6768  | 2.7819        | 0.7628        | 0.7766          | 8.8252         | 5.6686         | 11.6902              | 6.1087               |
| 49999 | 0.4309   | 0.4828     | 2.2211  | 0.6758  | 2.7655        | 0.7592        | 0.7845          | 8.8323         | 5.6753         | 11.6973              | 6.1161               |
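
The WER and CER columns can be computed with the evaluate library. Below is a minimal sketch with placeholder strings; the exact text normalization distinguishing the plain from the "exact" variants is not documented on this card:

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder pairs; in practice references come from the NST or
# Stortinget validation sets and predictions from the model.
references = ["dette er en test"]
predictions = ["dette er en test"]

# Multiply by 100 to match the percentage scale used in the table above.
print("WER:", 100 * wer_metric.compute(references=references, predictions=predictions))
print("CER:", 100 * cer_metric.compute(references=references, predictions=predictions))
```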

Framework versions

  • Transformers 4.36.2
  • Datasets 2.16.0
  • Tokenizers 0.15.0