mms-1b-50_450h-hau-ft

This model is a fine-tuned version of facebook/mms-1b-all on the /MNT/MD0/SYNVOICES/DATA/HAUSA_50_450H - NA dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3582
  • Wer: 0.3740
  • Cer: 0.0953
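
As a minimal, hedged usage sketch (not taken from this repository), the checkpoint can be loaded through the standard transformers CTC interface for MMS models; the audio path below is a placeholder and 16 kHz mono input is assumed:

```python
import torch
import torchaudio
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "CLEAR-Global/mms-1b-50_450h-hau-ft"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a placeholder audio file and resample to the 16 kHz rate the model expects.
waveform, sample_rate = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the most likely token per frame, then collapse repeats/blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```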

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • total_eval_batch_size: 16
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 2.0
  • mixed_precision_training: Native AMP
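
As a hedged illustration only, the list above maps onto a transformers TrainingArguments configuration roughly as follows; the output directory is an assumption and any option not listed above is left at its default:

```python
from transformers import TrainingArguments

# Sketch of a configuration matching the hyperparameters above.
# Per-device batch size 8 on 2 GPUs with gradient accumulation of 2
# gives the reported total train batch size of 32.
training_args = TrainingArguments(
    output_dir="mms-1b-50_450h-hau-ft",  # assumed output path
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=2.0,
    lr_scheduler_type="linear",
    warmup_steps=100,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # "Native AMP" mixed-precision training
)
```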

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.2783 | 0.0363 | 500 | 0.4821 | 0.4350 | 0.1141 |
| 0.2249 | 0.0727 | 1000 | 0.4596 | 0.4268 | 0.1127 |
| 0.201 | 0.1090 | 1500 | 0.4493 | 0.4236 | 0.1122 |
| 0.2836 | 0.1453 | 2000 | 0.4501 | 0.4269 | 0.1122 |
| 0.294 | 0.1816 | 2500 | 0.4369 | 0.4279 | 0.1121 |
| 0.1831 | 0.2180 | 3000 | 0.4406 | 0.4227 | 0.1094 |
| 0.2472 | 0.2543 | 3500 | 0.4310 | 0.4306 | 0.1119 |
| 0.2312 | 0.2906 | 4000 | 0.4182 | 0.4116 | 0.1065 |
| 0.1278 | 0.3269 | 4500 | 0.4221 | 0.4164 | 0.1093 |
| 0.2569 | 0.3633 | 5000 | 0.4207 | 0.4193 | 0.1093 |
| 0.268 | 0.3996 | 5500 | 0.4085 | 0.4223 | 0.1083 |
| 0.1984 | 0.4359 | 6000 | 0.4078 | 0.4166 | 0.1082 |
| 0.1848 | 0.4722 | 6500 | 0.4134 | 0.4102 | 0.1065 |
| 0.1408 | 0.5086 | 7000 | 0.4032 | 0.4076 | 0.1052 |
| 0.1363 | 0.5449 | 7500 | 0.3993 | 0.4100 | 0.1054 |
| 0.1362 | 0.5812 | 8000 | 0.4061 | 0.4023 | 0.1038 |
| 0.12 | 0.6175 | 8500 | 0.3991 | 0.4084 | 0.1043 |
| 0.1341 | 0.6539 | 9000 | 0.4010 | 0.4113 | 0.1050 |
| 0.1619 | 0.6902 | 9500 | 0.3966 | 0.4132 | 0.1053 |
| 0.1958 | 0.7265 | 10000 | 0.3942 | 0.4064 | 0.1037 |
| 0.1029 | 0.7628 | 10500 | 0.4000 | 0.4070 | 0.1053 |
| 0.1623 | 0.7992 | 11000 | 0.3934 | 0.4167 | 0.1066 |
| 0.1601 | 0.8355 | 11500 | 0.3878 | 0.3950 | 0.1016 |
| 0.1164 | 0.8718 | 12000 | 0.3897 | 0.3962 | 0.1020 |
| 0.1425 | 0.9081 | 12500 | 0.3844 | 0.3936 | 0.1011 |
| 0.1415 | 0.9445 | 13000 | 0.3891 | 0.3991 | 0.1027 |
| 0.1581 | 0.9808 | 13500 | 0.3834 | 0.3967 | 0.1024 |
| 0.0911 | 1.0171 | 14000 | 0.3841 | 0.3966 | 0.1020 |
| 0.1681 | 1.0534 | 14500 | 0.3833 | 0.3936 | 0.1013 |
| 0.0936 | 1.0897 | 15000 | 0.3760 | 0.3928 | 0.1007 |
| 0.153 | 1.1260 | 15500 | 0.3785 | 0.4009 | 0.1019 |
| 0.1341 | 1.1624 | 16000 | 0.3754 | 0.4000 | 0.1021 |
| 0.1054 | 1.1987 | 16500 | 0.3736 | 0.3971 | 0.1013 |
| 0.1736 | 1.2350 | 17000 | 0.3765 | 0.3990 | 0.1019 |
| 0.1225 | 1.2714 | 17500 | 0.3725 | 0.3892 | 0.0998 |
| 0.1278 | 1.3077 | 18000 | 0.3712 | 0.3830 | 0.0982 |
| 0.1387 | 1.3440 | 18500 | 0.3734 | 0.3904 | 0.0997 |
| 0.1316 | 1.3803 | 19000 | 0.3707 | 0.3889 | 0.0990 |
| 0.1508 | 1.4167 | 19500 | 0.3728 | 0.3784 | 0.0969 |
| 0.1016 | 1.4530 | 20000 | 0.3710 | 0.3991 | 0.1003 |
| 0.1331 | 1.4893 | 20500 | 0.3667 | 0.3844 | 0.0982 |
| 0.1565 | 1.5256 | 21000 | 0.3638 | 0.3848 | 0.0979 |
| 0.2197 | 1.5620 | 21500 | 0.3638 | 0.3801 | 0.0967 |
| 0.1962 | 1.5983 | 22000 | 0.3653 | 0.3804 | 0.0968 |
| 0.1322 | 1.6346 | 22500 | 0.3637 | 0.3808 | 0.0967 |
| 0.0772 | 1.6709 | 23000 | 0.3619 | 0.3788 | 0.0962 |
| 0.1107 | 1.7073 | 23500 | 0.3637 | 0.3847 | 0.0972 |
| 0.1919 | 1.7436 | 24000 | 0.3645 | 0.3778 | 0.0962 |
| 0.0596 | 1.7799 | 24500 | 0.3619 | 0.3743 | 0.0953 |
| 0.1335 | 1.8162 | 25000 | 0.3598 | 0.3728 | 0.0951 |
| 0.1303 | 1.8526 | 25500 | 0.3603 | 0.3724 | 0.0950 |
| 0.1777 | 1.8889 | 26000 | 0.3591 | 0.3749 | 0.0955 |
| 0.1977 | 1.9252 | 26500 | 0.3586 | 0.3739 | 0.0953 |
| 0.1497 | 1.9615 | 27000 | 0.3577 | 0.3741 | 0.0953 |
| 0.1043 | 1.9979 | 27500 | 0.3582 | 0.3740 | 0.0953 |
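
The Wer and Cer columns are word and character error rates. The sketch below, a hedged example with made-up transcripts rather than data from the evaluation set, shows how such values can be computed with the evaluate library:

```python
import evaluate

# Word and character error rate metrics, as reported in the table above.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Illustrative Hausa transcripts only; not taken from the evaluation data.
predictions = ["ina son ruwa", "yaya kake"]
references = ["ina son ruwan", "yaya kake"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```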

Framework versions

  • Transformers 4.48.1
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0