numan98 committed
Commit 5896287 · verified · 1 Parent(s): d31a5af

End of training

Files changed (2)
  1. README.md +83 -0
  2. generation_config.json +13 -0
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ library_name: transformers
+ language:
+ - ar
+ license: apache-2.0
+ base_model: tarteel-ai/whisper-tiny-ar-quran
+ tags:
+ - generated_from_trainer
+ datasets:
+ - numan98/synth-incorrect-verses
+ metrics:
+ - wer
+ model-index:
+ - name: Nextayah Tiny Whisper Finetuned
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: Synthetic Incorrect Verses
+       type: numan98/synth-incorrect-verses
+       config: default
+       split: None
+       args: 'split: test'
+     metrics:
+     - name: Wer
+       type: wer
+       value: 17.25043782837128
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Nextayah Tiny Whisper Finetuned
+
+ This model is a fine-tuned version of [tarteel-ai/whisper-tiny-ar-quran](https://huggingface.co/tarteel-ai/whisper-tiny-ar-quran) on the Synthetic Incorrect Verses dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0921
+ - Wer: 17.2504
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
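+ A minimal inference sketch; the repo id below is a placeholder, not the published id, so substitute the actual Hub id or a local checkpoint path:
+
+ ```python
+ from transformers import pipeline
+
+ # Placeholder: replace with the actual Hub repo id or local checkpoint path.
+ MODEL_ID = "path/to/nextayah-tiny-whisper-finetuned"
+
+ # Whisper checkpoints plug into the ASR pipeline directly.
+ asr = pipeline("automatic-speech-recognition", model=MODEL_ID)
+
+ # Transcribe an Arabic recitation clip (any format ffmpeg can decode).
+ result = asr("sample_recitation.wav")
+ print(result["text"])
+ ```
+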
+ ## Training and evaluation data
+
+ More information needed
+
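+ The card metadata points at `numan98/synth-incorrect-verses`. A loading sketch; the `test` split name is an assumption taken from the model-index `args` above and may differ in the actual repo:
+
+ ```python
+ from datasets import load_dataset
+
+ # Dataset id from the card metadata; split name assumed from the
+ # model-index args ('split: test').
+ ds = load_dataset("numan98/synth-incorrect-verses", split="test")
+ print(ds)
+ ```
+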
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the sketch after this list):
+ - learning_rate: 0.0001
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: ADAMW_TORCH (AdamW via PyTorch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 2000
+ - mixed_precision_training: Native AMP
+
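+ These settings map onto `Seq2SeqTrainingArguments` roughly as below. This is a reconstruction sketch, not the author's actual training script; `output_dir` is a placeholder and the eval cadence is inferred from the 500-step rows in the results table:
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./nextayah-tiny-whisper",  # placeholder
+     learning_rate=1e-4,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=8,
+     seed=42,
+     optim="adamw_torch",          # OptimizerNames.ADAMW_TORCH
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     warmup_steps=500,
+     max_steps=2000,
+     fp16=True,                    # "Native AMP" mixed precision
+     eval_strategy="steps",
+     eval_steps=500,               # inferred from the results table
+ )
+ ```
+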
+ ### Training results
+
+ | Training Loss | Epoch   | Step | Validation Loss | Wer     |
+ |:-------------:|:-------:|:----:|:---------------:|:-------:|
+ | 0.0217        | 8.7719  | 500  | 0.1236          | 22.5044 |
+ | 0.0025        | 17.5439 | 1000 | 0.1063          | 21.0158 |
+ | 0.0001        | 26.3158 | 1500 | 0.0910          | 17.4256 |
+ | 0.0001        | 35.0877 | 2000 | 0.0921          | 17.2504 |
+
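+ The Wer values appear to be percentages. For reference, the standard word error rate, with S substitutions, D deletions, I insertions, and N reference words, is:
+
+ $$\mathrm{WER} = \frac{S + D + I}{N} \times 100$$
+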
+ ### Framework versions
+
+ - Transformers 4.48.0
+ - PyTorch 2.5.1+cu121
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "begin_suppress_tokens": [
+     220,
+     50257
+   ],
+   "bos_token_id": 50257,
+   "decoder_start_token_id": 50258,
+   "eos_token_id": 50257,
+   "max_length": 448,
+   "pad_token_id": 50257,
+   "transformers_version": "4.48.0",
+   "use_cache": false
+ }
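
A quick inspection sketch: `GenerationConfig.from_pretrained` reads this file, and `begin_suppress_tokens` keeps the space token (220) and `<|endoftext|>` (50257) from being emitted at the very start of decoding. The repo id is again a placeholder:

```python
from transformers import GenerationConfig

# Placeholder id: point at the actual Hub repo or local checkpoint dir.
gen_config = GenerationConfig.from_pretrained("path/to/nextayah-tiny-whisper-finetuned")

print(gen_config.begin_suppress_tokens)  # [220, 50257]
print(gen_config.max_length)             # 448
```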