MubarakB committed on
Commit e6d3d6e
1 Parent(s): 90b0152

End of training

Files changed (2)
  1. README.md +28 -1
  2. model.safetensors +1 -1
README.md CHANGED
@@ -8,17 +8,34 @@ tags:
 - generated_from_trainer
 datasets:
 - tericlabs
+metrics:
+- wer
 model-index:
 - name: Whisper base Luganda
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: Sunbird
+      type: tericlabs
+    metrics:
+    - name: Wer
+      type: wer
+      value: 40.80100125156446
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/ai-research-lab/huggingface/runs/p2aegccn)
 # Whisper base Luganda
 
 This model is a fine-tuned version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en) on the Sunbird dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.6134
+- Wer: 40.8010
+- Cer: 10.6921
 
 ## Model description
 
@@ -49,6 +66,16 @@ The following hyperparameters were used during training:
 - training_steps: 4000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch   | Step | Validation Loss | Wer     | Cer     |
+|:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|
+| 0.3459        | 6.3694  | 1000 | 0.5885          | 45.1815 | 14.4972 |
+| 0.0532        | 12.7389 | 2000 | 0.5441          | 38.6733 | 10.0723 |
+| 0.0108        | 19.1083 | 3000 | 0.6118          | 39.9249 | 10.2789 |
+| 0.0044        | 25.4777 | 4000 | 0.6134          | 40.8010 | 10.6921 |
+
+
 ### Framework versions
 
 - Transformers 4.42.0.dev0
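The updated card describes an automatic-speech-recognition checkpoint fine-tuned from openai/whisper-base.en and reports WER/CER on the Sunbird evaluation set. For illustration only, a minimal inference sketch using the transformers pipeline follows; the repository id and the audio path are assumptions, as neither appears in this diff.

```python
# Minimal sketch (not part of the commit): transcribe audio with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="MubarakB/whisper-base-luganda",  # hypothetical repo id; substitute the actual one
)

# "sample.wav" is a placeholder path to a local audio recording.
print(asr("sample.wav")["text"])
```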
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:97e6a9208176c2f05cd2827bc2cc95dca42b2e9cac43d709e311de5c08c8d49b
+oid sha256:df1b0bedb197681c2821b868c33e6d9740111ff00c9b0d24c1df843272750417
 size 290401888
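The model.safetensors entry above is a Git LFS pointer rather than the weights themselves: the `oid` field is the sha256 of the real file and `size` is its length in bytes. A small sketch for checking that a downloaded copy matches the new pointer (the local path is a placeholder):

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer in this commit.
import hashlib
import os

EXPECTED_SHA256 = "df1b0bedb197681c2821b868c33e6d9740111ff00c9b0d24c1df843272750417"
EXPECTED_SIZE = 290401888  # bytes, from the pointer's "size" line

path = "model.safetensors"  # placeholder: wherever the weights were downloaded

assert os.path.getsize(path) == EXPECTED_SIZE, "size does not match the pointer"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
assert sha.hexdigest() == EXPECTED_SHA256, "sha256 does not match the pointer"
print("model.safetensors matches the LFS pointer")
```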