KevinKibe committed on
Commit cebd597 · verified · 1 Parent(s): 76ab158

Model save

Files changed (1):
  1. README.md +18 -17
README.md CHANGED
@@ -1,7 +1,5 @@
 ---
-base_model: KevinKibe/whisper-medium-finetuned
-datasets:
-- common_voice_16_1
+base_model: openai/whisper-medium
 library_name: peft
 license: apache-2.0
 tags:
@@ -14,17 +12,19 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/hv5p2q4p)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/hv5p2q4p)
 # whisper-medium-finetuned-finetuned
 
-This model is a fine-tuned version of [KevinKibe/whisper-medium-finetuned](https://huggingface.co/KevinKibe/whisper-medium-finetuned) on the common_voice_16_1 dataset.
+This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 3.1774
-- eval_wer: 77.2386
-- eval_runtime: 382.2171
-- eval_samples_per_second: 0.654
-- eval_steps_per_second: 0.042
-- epoch: 1.37
-- step: 100
+- eval_loss: 5.4740
+- eval_wer: 103.4838
+- eval_runtime: 542.8842
+- eval_samples_per_second: 0.461
+- eval_steps_per_second: 0.059
+- epoch: 0.25
+- step: 500
 
 ## Model description
 
@@ -43,19 +43,20 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
-- train_batch_size: 16
-- eval_batch_size: 16
+- learning_rate: 0.001
+- train_batch_size: 32
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- training_steps: 100
+- lr_scheduler_warmup_steps: 20
+- training_steps: 2000
 - mixed_precision_training: Native AMP
 
 ### Framework versions
 
 - PEFT 0.11.1
-- Transformers 4.39.2
+- Transformers 4.42.3
 - Pytorch 2.2.2+cu121
 - Datasets 2.19.2
-- Tokenizers 0.15.2
+- Tokenizers 0.19.1
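
Note on the metrics in this diff: eval_wer is a word error rate in percent, so the new run's 103.48 is valid even though it exceeds 100 — insertions count as errors, so the error total can be larger than the reference length. A minimal sketch of word-level WER via Levenshtein distance (an assumption for illustration; the exact metric implementation the Trainer used may differ):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: (subs + deletions + insertions) / ref words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

# Two spurious insertions against a one-word reference give WER = 200%.
print(wer("cat", "the cat sat"))  # → 200.0
```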