Max200293 committed
Commit b323c37
1 Parent(s): b02fbe9

wav2vec2-efeat-300m-norwegian-colab

README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 1.0
+      value: 2.027646033483319
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-lv-60-espeak-cv-ft) on the nb_samtale dataset.
 It achieves the following results on the evaluation set:
-- Loss: nan
-- Wer: 1.0
+- Loss: 743.7425
+- Wer: 2.0276
 
 ## Model description
 
@@ -53,11 +53,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0003
-- train_batch_size: 8
+- train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 16
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
@@ -68,9 +68,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 1200.6574 | 1.29 | 400 | inf | 1.0000 |
-| 929.0892 | 2.57 | 800 | inf | 1.0000 |
-| 1095.0877 | 3.86 | 1200 | nan | 1.0 |
+| 1239.2848 | 2.57 | 400 | 743.7425 | 2.0276 |
 
 
 ### Framework versions
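For context, a Wer above 1.0 (here 2.0276) is possible because word error rate counts substitutions, deletions and insertions against the reference word count, so insertion-heavy output can exceed 100%. The hyperparameter changes in this commit (per-device train batch size 8 → 16, effective batch size 16 → 32 with 2 gradient-accumulation steps) map naturally onto `transformers.TrainingArguments`. The sketch below only illustrates that mapping; `output_dir` and `num_train_epochs` are placeholders, as they are not visible in the diff hunks.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed in the README diff.
# output_dir and num_train_epochs are assumptions, not taken from the diff.
training_args = TrainingArguments(
    output_dir="wav2vec2-efeat-300m-norwegian-colab",  # placeholder output name
    learning_rate=3e-4,                # learning_rate: 0.0003
    per_device_train_batch_size=16,    # train_batch_size: 16 (was 8)
    per_device_eval_batch_size=8,      # eval_batch_size: 8
    gradient_accumulation_steps=2,     # -> total_train_batch_size: 32 (was 16)
    seed=42,
    adam_beta1=0.9,                    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,                  # lr_scheduler_warmup_steps: 500
)
```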
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:22dbd2bf3be4748ed7aaaa3697e690356d571b8f2ea6ce051df1b3c69f155fb0
+oid sha256:f9bfccbf2fc9ada47efbb9f8b7bb5ddedc7ee0a08f1d7503cc43df5a61b8633e
 size 1263414696
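model.safetensors is tracked with Git LFS, so the commit only rewrites the pointer file: the SHA-256 object id changes for the retrained weights while the byte size stays 1263414696. A locally downloaded copy can be checked against that pointer with plain Python, for example:

```python
import hashlib
import os

# Expected values taken from the new LFS pointer in the diff above.
EXPECTED_OID = "f9bfccbf2fc9ada47efbb9f8b7bb5ddedc7ee0a08f1d7503cc43df5a61b8633e"
EXPECTED_SIZE = 1263414696
path = "model.safetensors"  # local path to the downloaded file

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert sha256.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```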
runs/Nov22_15-46-04_e3d53f6ffcdd/events.out.tfevents.1700668162.e3d53f6ffcdd.634.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2d2f9b85c5d47735499c8abe3cdb523f73f9d753d04c930195bcc628eb5802dd
-size 6688
+oid sha256:ae86c0af0fc4a9e8c6cd07b56dc52de520ded851e15d449443bce083a3b686b5
+size 7042
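The TensorBoard event file is likewise an LFS pointer. For completeness, a minimal inference sketch for the fine-tuned checkpoint is shown below. The repository id is an assumption inferred from the commit title, and since the base model facebook/wav2vec2-lv-60-espeak-cv-ft is a CTC phoneme-recognition model, the decoded output may be a phoneme sequence rather than orthographic text, depending on the tokenizer used for fine-tuning.

```python
import torch
from transformers import AutoModelForCTC, AutoProcessor

# Assumed repository id, inferred from the commit title; adjust if it differs.
MODEL_ID = "Max200293/wav2vec2-efeat-300m-norwegian-colab"

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForCTC.from_pretrained(MODEL_ID)
model.eval()

# Wav2Vec2 expects 16 kHz mono float audio; one second of silence stands in
# for a real clip loaded with e.g. torchaudio or librosa.
speech = [0.0] * 16000
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # shape: (batch, time, vocab)

predicted_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(predicted_ids))
```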