# ReVoiceAI-W2V-BERT-Thai-IPA

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2532
- Wer: 0.0847
- Cer: 0.0276
## Model description
More information needed
## Intended uses & limitations
More information needed
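For transcription, the checkpoint should work with the standard `transformers` CTC inference pattern for w2v-bert-2.0 fine-tunes. The sketch below is illustrative, not taken from this repository: the Hub repo id of this model is not stated in the card, so it is left as a parameter, and the heavy imports are deferred inside the function so the sketch stays inspectable without the libraries installed.

```python
def transcribe(audio_path, model_id, sampling_rate=16000):
    """Sketch of greedy CTC inference with a fine-tuned w2v-bert-2.0 checkpoint.

    `model_id` is the Hub repo id of this model (not given in the card).
    """
    import torch
    import librosa
    from transformers import AutoProcessor, Wav2Vec2BertForCTC

    processor = AutoProcessor.from_pretrained(model_id)
    model = Wav2Vec2BertForCTC.from_pretrained(model_id)

    # Load and resample the audio to the 16 kHz the feature extractor expects.
    audio, _ = librosa.load(audio_path, sr=sampling_rate)
    inputs = processor(audio, sampling_rate=sampling_rate, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # Greedy CTC decoding: argmax per frame, then collapse repeats/blanks
    # via the tokenizer.
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]
```

Since the model predicts IPA targets, the returned string is an IPA transcription rather than Thai orthography.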
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 30
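The linear schedule with a 0.05 warmup ratio means the learning rate climbs linearly to 5e-05 over the first 5% of training steps, then decays linearly to zero. A minimal sketch of that shape (not the Trainer's internal implementation), using the 34,620 total steps from the results table:

```python
def lr_at_step(step, total_steps, peak_lr=5e-5, warmup_ratio=0.05):
    """Learning rate under linear warmup followed by linear decay."""
    warmup_steps = round(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Warmup phase: ramp from 0 up to the peak learning rate.
        return peak_lr * step / max(1, warmup_steps)
    # Decay phase: ramp from the peak back down to 0 at the final step.
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 34620  # 30 epochs x 1154 optimizer steps per epoch
```

With these numbers the peak is reached at step 1,731 and the rate returns to zero at step 34,620.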
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|---|---|---|---|---|---|
| 1.6842 | 1.0 | 1154 | 0.1430 | 0.7152 | 0.5133 |
| 0.4338 | 2.0 | 2308 | 0.0879 | 0.3823 | 0.2990 |
| 0.3128 | 3.0 | 3462 | 0.0705 | 0.2929 | 0.2408 |
| 0.2398 | 4.0 | 4616 | 0.0571 | 0.2670 | 0.1937 |
| 0.2061 | 5.0 | 5770 | 0.0576 | 0.2584 | 0.1878 |
| 0.1834 | 6.0 | 6924 | 0.0514 | 0.2383 | 0.1687 |
| 0.147 | 7.0 | 8078 | 0.0475 | 0.2141 | 0.1547 |
| 0.1412 | 8.0 | 9232 | 0.0465 | 0.2291 | 0.1530 |
| 0.1187 | 9.0 | 10386 | 0.0438 | 0.1988 | 0.1400 |
| 0.1055 | 10.0 | 11540 | 0.0466 | 0.2579 | 0.1581 |
| 0.1001 | 11.0 | 12694 | 0.0400 | 0.2064 | 0.1304 |
| 0.0923 | 12.0 | 13848 | 0.0374 | 0.2115 | 0.1188 |
| 0.081 | 13.0 | 15002 | 0.0379 | 0.2080 | 0.1196 |
| 0.0734 | 14.0 | 16156 | 0.0347 | 0.1915 | 0.1110 |
| 0.06 | 15.0 | 17310 | 0.0371 | 0.2192 | 0.1169 |
| 0.0576 | 16.0 | 18464 | 0.0363 | 0.1919 | 0.1156 |
| 0.0495 | 17.0 | 19618 | 0.0345 | 0.1928 | 0.1073 |
| 0.0493 | 18.0 | 20772 | 0.0338 | 0.1889 | 0.1089 |
| 0.0396 | 19.0 | 21926 | 0.0321 | 0.1961 | 0.0994 |
| 0.0371 | 20.0 | 23080 | 0.0319 | 0.1946 | 0.0992 |
| 0.0312 | 21.0 | 24234 | 0.0313 | 0.2144 | 0.0972 |
| 0.0263 | 22.0 | 25388 | 0.0314 | 0.2076 | 0.0972 |
| 0.0224 | 23.0 | 26542 | 0.0309 | 0.2165 | 0.0958 |
| 0.0202 | 24.0 | 27696 | 0.0301 | 0.2221 | 0.0924 |
| 0.0165 | 25.0 | 28850 | 0.0314 | 0.2358 | 0.0955 |
| 0.016 | 26.0 | 30004 | 0.0296 | 0.2357 | 0.0913 |
| 0.0111 | 27.0 | 31158 | 0.0296 | 0.2265 | 0.0906 |
| 0.0079 | 28.0 | 32312 | 0.0289 | 0.2357 | 0.0892 |
| 0.0081 | 29.0 | 33466 | 0.0279 | 0.2474 | 0.0861 |
| 0.0053 | 30.0 | 34620 | 0.0276 | 0.2532 | 0.0847 |
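WER and CER above are both Levenshtein-based: edit distance between reference and hypothesis divided by the reference length, counted over words for WER and characters for CER (character-level metrics are the more meaningful ones for IPA output, where "words" are loosely defined). In practice these are computed with a library such as `jiwer` or `evaluate`; a minimal self-contained sketch of the definitions:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (insert/delete/substitute)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(ref, hyp):
    """Word error rate: word-level edit distance over reference word count."""
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

def cer(ref, hyp):
    """Character error rate: char-level edit distance over reference length."""
    return edit_distance(ref, hyp) / len(ref)
```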
### Framework versions
- Transformers 4.53.0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.2