Training in progress, step 2000

Files changed:
- README.md (+30, -1)
- model.safetensors (+1, -1)
- runs/Feb14_22-51-17_sipl-7542-ct/events.out.tfevents.1707951078.sipl-7542-ct.1177613.3 (+2, -2)
- runs/Feb14_22-58-50_sipl-7542-ct/events.out.tfevents.1707951535.sipl-7542-ct.1185350.0 (+3, -0)
- runs/Feb14_23-00-02_sipl-7542-ct/events.out.tfevents.1707951604.sipl-7542-ct.1185350.1 (+3, -0)
- runs/Feb14_23-02-06_sipl-7542-ct/events.out.tfevents.1707951731.sipl-7542-ct.1186316.0 (+3, -0)
- runs/Feb15_06-05-10_sipl-7542-ct/events.out.tfevents.1707977115.sipl-7542-ct.1319778.0 (+3, -0)
- training_args.bin (+1, -1)
README.md CHANGED

@@ -6,6 +6,10 @@ base_model: openai/whisper-medium
 tags:
 - hf-asr-leaderboard
 - generated_from_trainer
+metrics:
+- precision
+- recall
+- f1
 model-index:
 - name: he
   results: []
@@ -17,6 +21,20 @@ should probably proofread and complete it, then remove this comment. -->
 # he
 
 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.1249
+- Precision: 0.2559
+- Recall: 0.2742
+- F1: 0.2642
+- Precision Median: 0.25
+- Recall Median: 0.2727
+- F1 Median: 0.2609
+- Precision Max: 0.4211
+- Recall Max: 0.4706
+- F1 Max: 0.4444
+- Precision Min: 0.0526
+- Recall Min: 0.05
+- F1 Min: 0.0513
 
 ## Model description
 
@@ -42,9 +60,20 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 20
-- training_steps:
+- training_steps: 25
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Precision Median | Recall Median | F1 Median | Precision Max | Recall Max | F1 Max | Precision Min | Recall Min | F1 Min |
+|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:----------------:|:-------------:|:---------:|:-------------:|:----------:|:------:|:-------------:|:----------:|:------:|
+| No log | 0.0 | 5 | 5.8029 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| No log | 0.01 | 10 | 2.9216 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | 0 | 0 | 0 | 0 | 0 |
+| No log | 0.01 | 15 | 2.1863 | 0.0540 | 0.0710 | 0.0609 | 0.0476 | 0.0769 | 0.0541 | 0.2 | 0.2353 | 0.2162 | 0.0 | 0.0 | 0.0 |
+| No log | 0.01 | 20 | 1.1833 | 0.3468 | 0.3692 | 0.3565 | 0.4118 | 0.4286 | 0.4242 | 0.8 | 0.8 | 0.8000 | 0.0 | 0.0 | 0.0 |
+| 3.5014 | 0.01 | 25 | 1.1249 | 0.2559 | 0.2742 | 0.2642 | 0.25 | 0.2727 | 0.2609 | 0.4211 | 0.4706 | 0.4444 | 0.0526 | 0.05 | 0.0513 |
+
+
 ### Framework versions
 
 - Transformers 4.36.2
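The F1 figures in the diff above should be the harmonic mean of the corresponding precision and recall. A quick sketch to confirm the reported numbers are self-consistent (values copied from the evaluation results; the small tolerance absorbs the rounding already present in the card):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# (precision, recall, reported F1) pairs from the README diff above.
reported = [
    (0.2559, 0.2742, 0.2642),  # mean
    (0.25,   0.2727, 0.2609),  # median
    (0.4211, 0.4706, 0.4444),  # max
    (0.0526, 0.05,   0.0513),  # min
]

for p, r, f1 in reported:
    # Reported values are rounded to 4 decimals, so check within 1e-3.
    assert abs(f1_score(p, r) - f1) < 1e-3, (p, r, f1)
```

The median, max, and min rows match the formula almost exactly; the mean row agrees to within rounding, which is expected since an F1 of averages need not equal an average of F1s.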
model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:f54d42f49d8b09dd8b31b9c978a06fcf8845df57976a09f0a437e88625dc530e
 size 3055671280
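The large binaries in this commit are stored as Git LFS pointer files: a three-line text stub (spec version, `oid sha256:<hash>`, size in bytes) that stands in for the real blob, which is why each diff shows only the hash and size changing. A minimal sketch of how such a pointer is derived from a local file (the function name is illustrative, not part of any library):

```python
import hashlib
import os

def lfs_pointer(path: str) -> str:
    """Build a Git LFS pointer (spec v1) for a local file:
    the file's SHA-256 digest plus its size in bytes."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so multi-GB checkpoints don't load into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{sha.hexdigest()}\n"
        f"size {os.path.getsize(path)}\n"
    )
```

On checkout, the LFS client verifies the downloaded blob against the pointer's `oid`, so the hash doubles as an integrity check for the 3 GB `model.safetensors` weight file.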
runs/Feb14_22-51-17_sipl-7542-ct/events.out.tfevents.1707951078.sipl-7542-ct.1177613.3 CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:67a11e54dc1073cb4fad02394880af6bcf9e45ae7bc260731442c5b720d10347
+size 5309
runs/Feb14_22-58-50_sipl-7542-ct/events.out.tfevents.1707951535.sipl-7542-ct.1185350.0 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:80e147276004fe3e3dab20aaebfc8cfd811b264fb7b74b9fb751b151d1047c63
+size 4136
runs/Feb14_23-00-02_sipl-7542-ct/events.out.tfevents.1707951604.sipl-7542-ct.1185350.1 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:371f58a6fb69357725132622a4b231f6d1c5c9858b8cdcee6f134dc6777559f3
+size 4136
runs/Feb14_23-02-06_sipl-7542-ct/events.out.tfevents.1707951731.sipl-7542-ct.1186316.0 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9394630085c2583a5d1447049e13e40565d3461a06ba8259fa9e384e116b9d8e
+size 10282
runs/Feb15_06-05-10_sipl-7542-ct/events.out.tfevents.1707977115.sipl-7542-ct.1319778.0 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab9bff1cf65c72f5ffe5cb8b214ed46e7a7c3834633412b5be2e9bf9cf8ac651
+size 21504
training_args.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:72d81e4dee5fa4935124c935ad0735bba7504a6a2a97c04edab02cab35debb0d
 size 4475
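`training_args.bin` serializes the trainer configuration whose hash changed in this commit; the values it encodes are the hyperparameters listed in the README diff. As a plain-data sketch of that configuration (key names here are illustrative, chosen to mirror the README, not tied to any particular library):

```python
# Hypothetical flat view of the training configuration from the README diff.
training_args = {
    "optimizer": "Adam",
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "warmup_steps": 20,
    "max_steps": 25,           # "training_steps: 25" in the card
    "mixed_precision": "amp",  # "Native AMP" in the card
}
```

With `max_steps` at 25 and evaluation every 5 steps, the five rows of the training-results table above account for the full run recorded in this commit.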