gokulsrinivasagan committed on
Commit 576ef2f · verified · 1 Parent(s): a8179a1

End of training
README.md CHANGED
@@ -23,7 +23,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.8044064748201439
+      value: 0.8057553956834532
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,8 +33,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) on the speech_commands dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9815
-- Accuracy: 0.8044
+- Loss: 1.1281
+- Accuracy: 0.8058
 
 ## Model description
 
@@ -54,11 +54,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 96
-- eval_batch_size: 96
+- train_batch_size: 48
+- eval_batch_size: 48
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 384
+- total_train_batch_size: 192
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
@@ -69,9 +69,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.7314        | 1.0   | 103  | 1.1991          | 0.7968   |
-| 0.2414        | 2.0   | 206  | 0.9815          | 0.8044   |
-| 0.1419        | 3.0   | 309  | 0.9754          | 0.8040   |
+| 0.3022        | 1.0   | 206  | 1.1935          | 0.7972   |
+| 0.1455        | 2.0   | 412  | 1.1336          | 0.8008   |
+| 0.0752        | 3.0   | 618  | 1.1281          | 0.8058   |
 
 
 ### Framework versions
runs/Oct04_22-43-51_ki-g0008/events.out.tfevents.1759621205.ki-g0008.3679513.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf70c7b638368102aee105e9cbec0737b4692476963264a8deb759ad4752a6e6
+size 411
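For reference, the hyperparameters in the updated card can be sketched as a plain config dict. This is illustrative only: the actual run used the Hugging Face Trainer, and the key names below merely mirror the card's fields; the single-device assumption is ours, not stated in the diff.

```python
# Hyperparameters from the updated (+) side of the card, as a plain dict.
# These mirror transformers.TrainingArguments names but are only a sketch.
hparams = {
    "learning_rate": 5e-05,
    "train_batch_size": 48,       # per-device, per the card
    "eval_batch_size": 48,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "linear",
    "warmup_ratio": 0.1,
}

# The card's total_train_batch_size is the per-device batch size times the
# gradient-accumulation steps (times the device count, assumed 1 here):
effective_batch = hparams["train_batch_size"] * hparams["gradient_accumulation_steps"]
print(effective_batch)  # 192, matching the card's total_train_batch_size
```

This also explains why the step counts in the results table doubled (103 → 206 steps per epoch): halving the per-device batch size from 96 to 48 halves the effective batch from 384 to 192, doubling the optimizer steps per epoch.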