cantillation committed
Commit ef856ff · verified · 1 Parent(s): a0fa6fc

Model save

Files changed (2):
  1. README.md +53 -9
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,23 +1,55 @@
  ---
  library_name: transformers
- language:
- - he
  license: apache-2.0
  base_model: openai/whisper-medium
  tags:
- - hf-asr-leaderboard
  - generated_from_trainer
+ metrics:
+ - wer
  model-index:
- - name: he-cantillation
+ - name: Teamim-medium_Random_WeightDecay-0.005_Augmented_New-Data_date-11-03-2025
    results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # he-cantillation
+ # Teamim-medium_Random_WeightDecay-0.005_Augmented_New-Data_date-11-03-2025

  This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0127
+ - Wer: 31.4035
+ - Avg Precision Exact: 0.5775
+ - Avg Recall Exact: 0.5799
+ - Avg F1 Exact: 0.5786
+ - Avg Precision Letter Shift: 0.5795
+ - Avg Recall Letter Shift: 0.5829
+ - Avg F1 Letter Shift: 0.5809
+ - Avg Precision Word Level: 0.5836
+ - Avg Recall Word Level: 0.5883
+ - Avg F1 Word Level: 0.5854
+ - Avg Precision Word Shift: 0.6875
+ - Avg Recall Word Shift: 0.7001
+ - Avg F1 Word Shift: 0.6917
+ - Precision Median Exact: 0.9545
+ - Recall Median Exact: 0.9621
+ - F1 Median Exact: 0.9613
+ - Precision Max Exact: 1.0
+ - Recall Max Exact: 1.0
+ - F1 Max Exact: 1.0
+ - Precision Min Exact: 0.0
+ - Recall Min Exact: 0.0
+ - F1 Min Exact: 0.0
+ - Precision Min Letter Shift: 0.0
+ - Recall Min Letter Shift: 0.0
+ - F1 Min Letter Shift: 0.0
+ - Precision Min Word Level: 0.0
+ - Recall Min Word Level: 0.0
+ - F1 Min Word Level: 0.0
+ - Precision Min Word Shift: 0.0
+ - Recall Min Word Shift: 0.0
+ - F1 Min Word Shift: 0.0

  ## Model description

@@ -37,17 +69,29 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 1e-05
- - train_batch_size: 16
- - eval_batch_size: 8
+ - train_batch_size: 8
+ - eval_batch_size: 2
  - seed: 42
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 2
+ - lr_scheduler_warmup_steps: 1000
+ - training_steps: 10000
  - mixed_precision_training: Native AMP

  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Wer | Avg Precision Exact | Avg Recall Exact | Avg F1 Exact | Avg Precision Letter Shift | Avg Recall Letter Shift | Avg F1 Letter Shift | Avg Precision Word Level | Avg Recall Word Level | Avg F1 Word Level | Avg Precision Word Shift | Avg Recall Word Shift | Avg F1 Word Shift | Precision Median Exact | Recall Median Exact | F1 Median Exact | Precision Max Exact | Recall Max Exact | F1 Max Exact | Precision Min Exact | Recall Min Exact | F1 Min Exact | Precision Min Letter Shift | Recall Min Letter Shift | F1 Min Letter Shift | Precision Min Word Level | Recall Min Word Level | F1 Min Word Level | Precision Min Word Shift | Recall Min Word Shift | F1 Min Word Shift |
+ |:-------------:|:------:|:-----:|:---------------:|:--------:|:-------------------:|:----------------:|:------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:------------------------:|:---------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:--------------------------:|:-----------------------:|:-------------------:|:------------------------:|:---------------------:|:-----------------:|:------------------------:|:---------------------:|:-----------------:|
+ | 0.3659 | 0.3101 | 1000 | 0.4443 | 55.3801 | 0.3563 | 0.3691 | 0.3620 | 0.3797 | 0.3929 | 0.3855 | 0.3883 | 0.4004 | 0.3937 | 0.5803 | 0.6096 | 0.5933 | 0.4104 | 0.4387 | 0.4264 | 0.8 | 0.8421 | 0.8205 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.05 | 0.0769 | 0.0606 |
+ | 0.1704 | 0.6202 | 2000 | 0.1796 | 108.5965 | 0.1652 | 0.1653 | 0.1651 | 0.1802 | 0.1801 | 0.1798 | 0.1877 | 0.1852 | 0.1857 | 0.3054 | 0.3010 | 0.3022 | 0.0 | 0.0 | 0.0 | 1.0 | 0.9545 | 0.9767 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.1597 | 0.9302 | 3000 | 0.1151 | 28.7719 | 0.5223 | 0.5324 | 0.5270 | 0.5340 | 0.5441 | 0.5387 | 0.5437 | 0.5504 | 0.5466 | 0.6927 | 0.7055 | 0.6983 | 0.6929 | 0.7042 | 0.6956 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0831 | 1.2403 | 4000 | 0.0854 | 32.3977 | 0.4387 | 0.4427 | 0.4405 | 0.4455 | 0.4503 | 0.4477 | 0.4502 | 0.4553 | 0.4524 | 0.6281 | 0.6409 | 0.6335 | 0.1303 | 0.1366 | 0.1327 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0738 | 1.5504 | 5000 | 0.0646 | 22.3977 | 0.5615 | 0.5614 | 0.5613 | 0.5675 | 0.5674 | 0.5673 | 0.5725 | 0.5739 | 0.5730 | 0.7393 | 0.7458 | 0.7419 | 0.7907 | 0.7980 | 0.7952 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0913 | 1.8605 | 6000 | 0.0462 | 26.0234 | 0.5871 | 0.5894 | 0.5881 | 0.5930 | 0.5955 | 0.5941 | 0.5961 | 0.5991 | 0.5975 | 0.7016 | 0.7077 | 0.7033 | 0.8775 | 0.8819 | 0.8776 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0279 | 2.1705 | 7000 | 0.0330 | 37.3684 | 0.4606 | 0.4657 | 0.4630 | 0.4649 | 0.4704 | 0.4675 | 0.4698 | 0.4757 | 0.4726 | 0.6104 | 0.6275 | 0.6165 | 0.0889 | 0.0883 | 0.0885 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0581 | 2.4806 | 8000 | 0.0228 | 21.4620 | 0.6468 | 0.6467 | 0.6467 | 0.6506 | 0.6505 | 0.6504 | 0.6560 | 0.6569 | 0.6563 | 0.7490 | 0.7511 | 0.7496 | 0.9468 | 0.9524 | 0.9468 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0321 | 2.7907 | 9000 | 0.0170 | 25.7895 | 0.6249 | 0.6274 | 0.6260 | 0.6283 | 0.6311 | 0.6295 | 0.6324 | 0.6348 | 0.6335 | 0.7323 | 0.7476 | 0.7373 | 0.9456 | 0.9506 | 0.9498 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 0.0479 | 3.1008 | 10000 | 0.0127 | 31.4035 | 0.5775 | 0.5799 | 0.5786 | 0.5795 | 0.5829 | 0.5809 | 0.5836 | 0.5883 | 0.5854 | 0.6875 | 0.7001 | 0.6917 | 0.9545 | 0.9621 | 0.9613 | 1.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |


  ### Framework versions
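
The hyperparameter list in the updated card maps directly onto `transformers` training arguments. Below is a minimal sketch of how those values would typically be expressed with `Seq2SeqTrainingArguments` for Whisper fine-tuning; it is not the training script from this commit, and the `output_dir`, `weight_decay` (suggested only by the run name), and `predict_with_generate` settings are assumptions.

```python
# Minimal sketch reconstructing the hyperparameters listed in the updated card.
# Not the actual training script from this commit; values not listed in the card
# (output_dir, weight_decay, predict_with_generate) are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="Teamim-medium_Random_WeightDecay-0.005_Augmented_New-Data_date-11-03-2025",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch",          # OptimizerNames.ADAMW_TORCH; betas=(0.9, 0.999) and eps=1e-8 are the library defaults
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=10000,
    fp16=True,                    # "Native AMP" mixed-precision training
    weight_decay=0.005,           # assumption: suggested by the run name, not listed in the card
    predict_with_generate=True,   # assumption: typical when reporting WER on the eval set
)
```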
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d8c0606de390d8b818eb75f3d7edbd0c240bbfae6a09c56f38725e0cce38ef91
+ oid sha256:25e0943a3f4f892e8eeb4de2c4bbfa747b85e8e0669ef214e2604081a4bf569c
  size 3055671280
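
The model.safetensors change only swaps the Git LFS pointer: the size stays at 3055671280 bytes and the new weights are identified by the sha256 OID above. A minimal sketch for checking a downloaded copy against that pointer, assuming the file sits at a local path of your choosing:

```python
# Minimal sketch: verify a downloaded model.safetensors against the new LFS pointer
# (oid sha256:25e0943a..., size 3055671280 bytes). The local path is an assumption.
import hashlib
from pathlib import Path

EXPECTED_OID = "25e0943a3f4f892e8eeb4de2c4bbfa747b85e8e0669ef214e2604081a4bf569c"
EXPECTED_SIZE = 3055671280

path = Path("model.safetensors")  # assumed download location
assert path.stat().st_size == EXPECTED_SIZE, "size does not match the LFS pointer"

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print("OK" if digest.hexdigest() == EXPECTED_OID else "checksum mismatch")
```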