fahadqazi committed on
Commit 065e570 · verified · 1 Parent(s): 877c1a3

End of training

README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ library_name: transformers
+ language:
+ - sd
+ base_model: Bhuvana/t5-base-spellchecker
+ tags:
+ - generated_from_trainer
+ datasets:
+ - allenai/madlad-400
+ model-index:
+ - name: T5 Sindhi Spell Checker - Fahad Maqsood Qazi
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # T5 Sindhi Spell Checker - Fahad Maqsood Qazi
+
+ This model is a fine-tuned version of [Bhuvana/t5-base-spellchecker](https://huggingface.co/Bhuvana/t5-base-spellchecker) on the MADLAD-400 (`allenai/madlad-400`) dataset.
+ It achieves the following results on the evaluation set (see the note on ROUGE after the list):
+ - eval_loss: 0.0169
+ - eval_rouge1: 23.0373
+ - eval_rouge2: 18.6848
+ - eval_rougeL: 23.0844
+ - eval_rougeLsum: 23.1065
+ - eval_gen_len: 16.9753
+ - eval_runtime: 75.0358
+ - eval_samples_per_second: 39.981
+ - eval_steps_per_second: 0.493
+ - epoch: 0.2210
+ - step: 16000
+
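+ For reference, ROUGE scores of this kind are typically computed with the `evaluate` library; the sketch below is an illustration, not the author's evaluation script, and the example strings are made up:
+
+ ```python
+ import evaluate
+
+ rouge = evaluate.load("rouge")
+ scores = rouge.compute(
+     predictions=["corrected sentence"],  # model outputs (hypothetical)
+     references=["corrected sentence"],   # gold spellings (hypothetical)
+ )
+ print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
+ ```
+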
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
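+ A minimal inference sketch follows. The repository id and the input string are placeholders (assumptions), since the card does not yet document usage; the checkpoint loads like any seq2seq `transformers` model:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ # Placeholder repo id -- substitute the actual Hub path of this checkpoint.
+ model_id = "fahadqazi/t5-sindhi-spellchecker"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+
+ # Hypothetical input: a misspelled Sindhi sentence to be corrected.
+ text = "<misspelled Sindhi sentence>"
+ inputs = tokenizer(text, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=32)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+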
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (mapped onto `Seq2SeqTrainingArguments` in the sketch after this list):
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 82
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 64
+ - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
+ - lr_scheduler_type: linear
+ - training_steps: 30000
+ - mixed_precision_training: Native AMP
+
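+ A sketch of how these values map onto `Seq2SeqTrainingArguments`. This is a reconstruction from the list above, not the author's actual training script; `output_dir` and `predict_with_generate` are assumptions:
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./t5-sindhi-spellchecker",  # assumption: not stated in the card
+     learning_rate=2e-5,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=82,
+     gradient_accumulation_steps=8,  # 8 x 8 = total train batch size of 64
+     seed=42,
+     optim="adamw_torch",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     max_steps=30000,
+     fp16=True,  # "Native AMP" mixed precision
+     predict_with_generate=True,  # assumption: consistent with the reported ROUGE/gen_len metrics
+ )
+ ```
+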
+ ### Framework versions
+
+ - Transformers 4.47.1
+ - Pytorch 2.5.1+cu121
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "pad_token_id": 0,
+   "transformers_version": "4.47.1"
+ }
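These generation defaults (decoder start, EOS, and padding token ids) match the standard T5 configuration. A minimal sketch of recreating such a file with `transformers`, assuming a hypothetical local checkpoint directory:

```python
from transformers import GenerationConfig

# Values copied from the diff above.
gen_config = GenerationConfig(
    decoder_start_token_id=0,
    eos_token_id=1,
    pad_token_id=0,
)

# Writes generation_config.json into the given directory (hypothetical path).
gen_config.save_pretrained("./t5-sindhi-spellchecker")
```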
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:00398b8e17ee2c9eec34712bbfda41eb14e809e703b3c2376941d87490dfb0b1
+ oid sha256:a0aa8490b19c8a3f79db4816f3188e3b8cc04574909ce2b0ef918b3c3952e30f
  size 891644712
runs/Jan08_01-14-50_7f5b66f0dc69/events.out.tfevents.1736298906.7f5b66f0dc69.4539.12 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:76cd7de36f374dbc9e9bc6e23d2a707bf6d61eea7c5538000c3bf18ea775ca04
- size 149480
+ oid sha256:be40f69d2d5125c611b675c9fb5736cf253299d60f58856790361e73cea70f52
+ size 150746