JUJORUME committed
Commit 879cc99 · 1 Parent(s): a03b85e

End of training

README.md CHANGED
@@ -5,6 +5,8 @@ license: apache-2.0
 base_model: openai/whisper-medium
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: FT-Spanish-openai/whisper-medium
   results: []
@@ -15,7 +17,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # FT-Spanish-openai/whisper-medium
 
-This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
+This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.0000
+- Wer: 0.0
 
 ## Model description
 
@@ -43,16 +48,34 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 100
+- training_steps: 4000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss | Wer    |
+|:-------------:|:-----:|:----:|:---------------:|:------:|
+| 0.0134        | 4.83  | 250  | 0.0090          | 3.1136 |
+| 0.0204        | 9.66  | 500  | 0.0153          | 2.9471 |
+| 0.0248        | 14.49 | 750  | 0.0164          | 2.8472 |
+| 0.0062        | 19.32 | 1000 | 0.0033          | 0.2581 |
+| 0.0032        | 24.15 | 1250 | 0.0031          | 0.0833 |
+| 0.0023        | 28.99 | 1500 | 0.0004          | 0.0083 |
+| 0.0001        | 33.82 | 1750 | 0.0001          | 0.0    |
+| 0.0           | 38.65 | 2000 | 0.0000          | 0.0    |
+| 0.0           | 43.48 | 2250 | 0.0000          | 0.0    |
+| 0.0           | 48.31 | 2500 | 0.0000          | 0.0    |
+| 0.0           | 53.14 | 2750 | 0.0000          | 0.0    |
+| 0.0           | 57.97 | 3000 | 0.0000          | 0.0    |
+| 0.0           | 62.8  | 3250 | 0.0000          | 0.0    |
+| 0.0           | 67.63 | 3500 | 0.0000          | 0.0    |
+| 0.0           | 72.46 | 3750 | 0.0000          | 0.0    |
+| 0.0           | 77.29 | 4000 | 0.0000          | 0.0    |
 
 
 ### Framework versions
 
-- Transformers 4.36.0.dev0
-- Pytorch 2.1.0+cu118
+- Transformers 4.35.2
+- Pytorch 2.0.1+cu117
 - Datasets 2.15.0
 - Tokenizers 0.15.0
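
For context on the updated card: a minimal usage sketch for the fine-tuned checkpoint. The repo id and audio file below are placeholder assumptions, not taken from this commit; substitute the actual Hub id this commit belongs to.

```python
# Minimal usage sketch (not part of this commit). The repo id is a placeholder
# assumption; replace it with the actual Hub id of this fine-tuned model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="JUJORUME/FT-Spanish-openai-whisper-medium",  # placeholder repo id
)

# Transcribe a Spanish audio file, forcing Spanish transcription at generation time.
result = asr(
    "sample_es.wav",  # hypothetical input file
    generate_kwargs={"language": "spanish", "task": "transcribe"},
)
print(result["text"])
```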
generation_config.json CHANGED
@@ -152,10 +152,11 @@
   "max_length": 448,
   "no_timestamps_token_id": 50363,
   "pad_token_id": 50257,
+  "return_timestamps": false,
   "suppress_tokens": [],
   "task_to_id": {
     "transcribe": 50359,
     "translate": 50358
   },
-  "transformers_version": "4.36.0.dev0"
+  "transformers_version": "4.35.2"
 }
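
The new `"return_timestamps": false` entry sets the default for Whisper generation. A short sketch of how it could be inspected and overridden per call, assuming the same placeholder repo id as above:

```python
# Sketch (placeholder repo id): inspect the updated generation config and
# override return_timestamps for a single generate() call if needed.
from transformers import GenerationConfig

repo_id = "JUJORUME/FT-Spanish-openai-whisper-medium"  # placeholder assumption
gen_config = GenerationConfig.from_pretrained(repo_id)
print(gen_config.return_timestamps)  # False after this commit

# With a loaded WhisperForConditionalGeneration `model` and processed
# `input_features`, timestamps can still be requested per call despite the default:
# predicted_ids = model.generate(input_features, return_timestamps=True)
```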
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:97818b51bf62a47db25dfc5846b415f03efdc4fa464ed7e5215c5e78969a67ba
+oid sha256:e8ac12757438e060d5a7e30835babe714139c7249bcdc1a1e38c917b2012fd81
 size 3055544304
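
The pointer file records only the sha256 oid and byte size of the new weights. A quick sketch for checking a locally downloaded `model.safetensors` against the values in this commit (the local path is an assumption):

```python
# Verify a downloaded model.safetensors against the LFS pointer in this commit.
import hashlib
import os

path = "model.safetensors"  # local path to the downloaded weights (assumption)
expected_oid = "e8ac12757438e060d5a7e30835babe714139c7249bcdc1a1e38c917b2012fd81"
expected_size = 3055544304

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert sha.hexdigest() == expected_oid, "sha256 mismatch"
print("model.safetensors matches the LFS pointer in this commit")
```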
runs/Dec05_16-42-29_DESKTOP-IB90627/events.out.tfevents.1701812560.DESKTOP-IB90627 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:98f7cada74e7244679f5e5e0c2b01bfff3fb93f5f77c82198efac18114b4c5dc
-size 27430
+oid sha256:a79015c542cffb70ca57831cab747144e6ee88a25aad29619eb913b9c016a4e3
+size 35654
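
The updated event file holds the TensorBoard scalars behind the training table above. A sketch for reading it locally with the TensorBoard event accumulator; the scalar tag name in the query is an assumption, so list the available tags first:

```python
# Sketch: read the updated TensorBoard event file with the tensorboard package.
# Scalar tag names vary by Trainer version, so list them before querying.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Dec05_16-42-29_DESKTOP-IB90627")  # run dir from this repo
acc.Reload()
print(acc.Tags()["scalars"])  # available scalar tags

# Example query once the tag is known (tag name below is an assumption):
# for event in acc.Scalars("eval/wer"):
#     print(event.step, event.value)
```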