poojastl2024 committed
Commit c2bce3b · verified · 1 Parent(s): eb51ce1

Model save

Files changed (1):
  1. README.md +26 -6
README.md CHANGED
@@ -4,6 +4,8 @@ license: apache-2.0
 base_model: openai/whisper-large-v3
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: whisper-large-v3-lora-bn-en-banking
   results: []
@@ -15,6 +17,10 @@ should probably proofread and complete it, then remove this comment. -->
 # whisper-large-v3-lora-bn-en-banking
 
 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 4.6914
+- Wer: 98.5075
+- Cer: 94.7368
 
 ## Model description
 
@@ -33,20 +39,34 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0002
-- train_batch_size: 4
-- eval_batch_size: 2
+- learning_rate: 1e-05
+- train_batch_size: 1
+- eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 8
-- total_train_batch_size: 32
-- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- total_train_batch_size: 8
+- optimizer: Use OptimizerNames.ADAFACTOR and the args are:
+No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 5
+- num_epochs: 100
 - mixed_precision_training: Native AMP
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer     |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
+| 2.3917        | 1.0   | 1    | 4.8753          | 98.5075 | 94.7368 |
+| 2.3917        | 2.0   | 2    | 4.8739          | 98.5075 | 94.7368 |
+| 2.3911        | 3.0   | 3    | 4.8704          | 98.5075 | 94.7368 |
+| 2.3893        | 4.0   | 4    | 4.8605          | 98.5075 | 94.7368 |
+| 2.3856        | 5.0   | 5    | 4.8488          | 98.5075 | 94.7368 |
+| 2.3801        | 6.0   | 6    | 4.8313          | 98.5075 | 94.7368 |
+| 2.3728        | 7.0   | 7    | 4.8101          | 98.5075 | 94.7368 |
+| 2.3642        | 8.0   | 8    | 4.7857          | 98.5075 | 94.7368 |
+| 2.354         | 9.0   | 9    | 4.7571          | 98.5075 | 94.7368 |
+| 2.3421        | 10.0  | 10   | 4.7262          | 98.5075 | 94.7368 |
+| 2.3302        | 11.0  | 11   | 4.6914          | 98.5075 | 94.7368 |
 
 
 ### Framework versions
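For reference, the new hyperparameter block in this diff maps directly onto Hugging Face Transformers' `Seq2SeqTrainingArguments`. Below is a minimal sketch of that mapping, assuming a standard Seq2Seq trainer setup; the `output_dir` value is an assumption, not taken from this commit. Note that the reported total_train_batch_size of 8 is simply train_batch_size (1) × gradient_accumulation_steps (8).

```python
# Sketch only: restates the hyperparameter list above as training arguments.
# output_dir is an assumed name; the other values mirror the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-lora-bn-en-banking",  # assumption
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=8,   # effective train batch = 1 * 8 = 8
    num_train_epochs=100,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adafactor",               # OptimizerNames.ADAFACTOR
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```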
 
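The Wer and Cer columns in the results table are word and character error rates expressed as percentages. As a minimal sketch of how such figures are typically computed with the Hugging Face `evaluate` library (the commit does not include the evaluation code, and the transcripts below are invented for illustration):

```python
# Illustrative only: computing WER/CER percentages like those reported above.
# The prediction/reference strings are invented, not from this model's eval set.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["the account balance is ten taka"]           # hypothetical model output
references = ["the account balance is ten thousand taka"]   # hypothetical ground truth

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"Wer: {wer:.4f}  Cer: {cer:.4f}")
```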
 
 
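The model name indicates a LoRA adapter for openai/whisper-large-v3, so inference would normally load the base model and attach the adapter through PEFT. The sketch below rests on that assumption; in particular, the adapter repo id is inferred from the committer and model name and is not confirmed by the card.

```python
# Sketch, assuming this repo hosts a PEFT LoRA adapter for Whisper.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
# Adapter id inferred from committer + model name: an assumption.
model = PeftModel.from_pretrained(base, "poojastl2024/whisper-large-v3-lora-bn-en-banking")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# Typical usage, where `audio` is a 16 kHz mono float array:
# inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
# ids = model.generate(input_features=inputs.input_features)
# text = processor.batch_decode(ids, skip_special_tokens=True)[0]
```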