---
language:
- ar
license: apache-2.0
tags:
- generated_from_trainer
base_model: tarteel-ai/whisper-base-ar-quran
datasets:
- zolfa
metrics:
- wer
model-index:
- name: Zolfa-raghadomar
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Zolfa Dataset
      type: zolfa
      args: 'config: ar, split: test'
    metrics:
    - type: wer
      value: 14.285714285714285
      name: Wer
---
# Zolfa-raghadomar
This model is a fine-tuned version of [tarteel-ai/whisper-base-ar-quran](https://huggingface.co/tarteel-ai/whisper-base-ar-quran) on the Zolfa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2374
- WER: 14.2857
## Model description
Zolfa-raghadomar is an Arabic automatic speech recognition (ASR) model. It further fine-tunes [tarteel-ai/whisper-base-ar-quran](https://huggingface.co/tarteel-ai/whisper-base-ar-quran), a Whisper-base checkpoint already adapted to Quranic Arabic recitation, on the Zolfa dataset. Like all Whisper models, it is a Transformer encoder-decoder that maps log-Mel spectrogram features of 16 kHz audio to text tokens.
## Intended uses & limitations
The model is intended for transcribing Arabic speech, in particular recitation-style audio similar to the Zolfa training data. It has only been evaluated on the Zolfa test split (WER 14.29%), so performance on other Arabic dialects, spontaneous speech, or noisy recording conditions is untested and may be substantially worse.
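A minimal inference sketch using the 🤗 Transformers `pipeline` API; the Hub repo id below is an assumption based on the model name, so substitute the actual path:
```python
from transformers import pipeline

# Assumed repo id; replace with the real "<user>/Zolfa-raghadomar" path.
asr = pipeline(
    "automatic-speech-recognition",
    model="raghadomar/Zolfa-raghadomar",
)

# Whisper expects 16 kHz audio; the pipeline decodes and resamples
# common audio formats via ffmpeg before feature extraction.
print(asr("recitation.wav")["text"])
```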
## Training and evaluation data
The model was fine-tuned on the Zolfa dataset and evaluated on its test split (metadata: `config: ar, split: test`). The card does not document the dataset's size, speakers, or recording conditions.
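For reference, a minimal sketch of loading that evaluation split with 🤗 Datasets, assuming `zolfa` resolves to a dataset the loader can find (the id comes from the metadata above and may instead refer to a private or local dataset):
```python
from datasets import load_dataset

# Assumption: "zolfa" is loadable by this id with an "ar" config, as
# suggested by the card metadata; it may be private or locally defined.
zolfa_test = load_dataset("zolfa", "ar", split="test")
print(zolfa_test)
```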
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch reproducing them follows the list):
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 1000
- mixed_precision_training: Native AMP
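As an illustration, the listed values map onto 🤗 Transformers `Seq2SeqTrainingArguments` roughly as follows. This is a reconstruction from the card, not the original training script, and `output_dir` is assumed:
```python
from transformers import Seq2SeqTrainingArguments

# Sketch reconstructed from the hyperparameters above; everything not
# listed is left at its Transformers 4.41 defaults (which include the
# reported Adam betas=(0.9, 0.999) and epsilon=1e-08).
training_args = Seq2SeqTrainingArguments(
    output_dir="Zolfa-raghadomar",   # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=5,
    max_steps=1000,
    fp16=True,                       # "Native AMP" mixed precision
)
```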
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0669 | 0.6993 | 100 | 0.2045 | 21.4286 |
| 0.024 | 1.3986 | 200 | 0.2314 | 17.3469 |
| 0.0085 | 2.0979 | 300 | 0.2133 | 16.3265 |
| 0.0085 | 2.7972 | 400 | 0.2120 | 14.2857 |
| 0.0052 | 3.4965 | 500 | 0.2225 | 14.2857 |
| 0.0036 | 4.1958 | 600 | 0.2354 | 14.2857 |
| 0.0016 | 4.8951 | 700 | 0.2322 | 14.2857 |
| 0.0002 | 5.5944 | 800 | 0.2373 | 14.2857 |
| 0.0003 | 6.2937 | 900 | 0.2372 | 14.2857 |
| 0.0012 | 6.9930 | 1000 | 0.2374 | 14.2857 |
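The WER column is the word error rate in percent: the number of word substitutions, deletions, and insertions needed to turn a hypothesis into the reference, divided by the number of reference words. A minimal sketch of computing it with the `evaluate` library, which typical Whisper fine-tuning scripts use (the strings here are hypothetical placeholders, not Zolfa data):
```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical transcripts; the real evaluation uses the Zolfa test split.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")  # one deletion over five reference words -> 20.00%
```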
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1