ayymen committed on
Commit fcf6da7 · verified · 1 Parent(s): 2366724

Model save

Files changed (2):
  1. README.md +74 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: facebook/w2v-bert-2.0
+ tags:
+ - generated_from_trainer
+ metrics:
+ - wer
+ model-index:
+ - name: w2v-bert-2.0-chichewa_34h
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # w2v-bert-2.0-chichewa_34h
+
+ This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4660
+ - Wer: 0.3855
+ - Cer: 0.1101
+
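+ As a quick reference (not part of the autogenerated card), the snippet below sketches a typical CTC transcription call for a w2v-bert-2.0 fine-tune. The repository id `ayymen/w2v-bert-2.0-chichewa_34h` and the audio path are assumptions, and it presumes the processor (feature extractor + tokenizer) was pushed alongside the weights.
+
+ ```python
+ import torch
+ import librosa
+ from transformers import AutoProcessor, Wav2Vec2BertForCTC
+
+ # Assumed repository id; replace with the actual Hub path of this checkpoint.
+ model_id = "ayymen/w2v-bert-2.0-chichewa_34h"
+
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = Wav2Vec2BertForCTC.from_pretrained(model_id)
+
+ # Load any 16 kHz mono clip; "sample.wav" is a placeholder path.
+ speech, _ = librosa.load("sample.wav", sr=16000)
+
+ inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ predicted_ids = torch.argmax(logits, dim=-1)
+ print(processor.batch_decode(predicted_ids)[0])
+ ```
+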
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - training_steps: 100000
+ - mixed_precision_training: Native AMP
+
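+ For illustration only, here is a hedged sketch of how these settings map onto `transformers.TrainingArguments`; `output_dir` and anything not listed above are assumptions, not the exact training script.
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Sketch of the hyperparameters listed above; output_dir is assumed.
+ training_args = TrainingArguments(
+     output_dir="w2v-bert-2.0-chichewa_34h",
+     learning_rate=3e-5,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=32,
+     gradient_accumulation_steps=2,  # 32 * 2 = total train batch size of 64
+     seed=42,
+     optim="adamw_torch",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.1,
+     max_steps=100_000,
+     fp16=True,  # "Native AMP" mixed-precision training
+ )
+ ```
+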
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
+ |:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
+ | 0.4609 | 5.6197 | 1000 | 0.7327 | 0.6746 | 0.1953 |
+ | 0.1207 | 11.2366 | 2000 | 0.4130 | 0.4797 | 0.1341 |
+ | 0.1104 | 16.8563 | 3000 | 0.3404 | 0.4165 | 0.1182 |
+ | 0.0417 | 22.4732 | 4000 | 0.3389 | 0.4046 | 0.1149 |
+ | 0.0849 | 28.0901 | 5000 | 0.3593 | 0.3860 | 0.1110 |
+ | 0.0169 | 33.7099 | 6000 | 0.4053 | 0.3799 | 0.1086 |
+ | 0.0625 | 39.3268 | 7000 | 0.4394 | 0.3820 | 0.1103 |
+ | 0.0226 | 44.9465 | 8000 | 0.4477 | 0.3922 | 0.1099 |
+ | 0.0236 | 50.5634 | 9000 | 0.4660 | 0.3855 | 0.1101 |
+
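+ Wer and Cer above are word and character error rates expressed as fractions. A minimal sketch of computing them with the `evaluate` library, using placeholder strings rather than data from this run:
+
+ ```python
+ import evaluate
+
+ wer_metric = evaluate.load("wer")
+ cer_metric = evaluate.load("cer")
+
+ # Placeholder transcripts; in practice these come from the eval set and model output.
+ references = ["moni dziko lapansi"]
+ predictions = ["moni dziko"]
+
+ print("WER:", wer_metric.compute(predictions=predictions, references=references))
+ print("CER:", cer_metric.compute(predictions=predictions, references=references))
+ ```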
+
+ ### Framework versions
+
+ - Transformers 4.48.1
+ - PyTorch 2.6.0+cu124
+ - Datasets 3.5.0
+ - Tokenizers 0.21.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ad01cc585b591afa3e8fcba5162c1d816072eeb25710e3637e9230689563ef38
+ oid sha256:62cdd09a829212acee2b4968aa60a23d7b5bbba72d38fd83c90b4b60fa1c76cf
  size 2423064660