gjyotk committed on
Commit da7b8be
1 Parent(s): 66d94b2

End of training

README.md CHANGED
@@ -17,11 +17,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 1.4794
- - Rouge1: 0.3292
- - Rouge2: 0.1311
- - Rougel: 0.2767
- - Rougelsum: 0.2770
+ - Loss: 1.1086
+ - Rouge1: 0.4524
+ - Rouge2: 0.2784
+ - Rougel: 0.4139
+ - Rougelsum: 0.4127
 
 ## Model description
 
@@ -52,16 +52,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
- | No log | 1.0 | 63 | 1.7278 | 0.3184 | 0.1061 | 0.2644 | 0.2643 |
- | No log | 2.0 | 126 | 1.5746 | 0.3146 | 0.1271 | 0.2676 | 0.2678 |
- | No log | 3.0 | 189 | 1.4806 | 0.3135 | 0.1268 | 0.2764 | 0.2773 |
- | No log | 4.0 | 252 | 1.4676 | 0.3217 | 0.1246 | 0.2700 | 0.2703 |
- | No log | 5.0 | 315 | 1.4794 | 0.3292 | 0.1311 | 0.2767 | 0.2770 |
+ | No log | 1.0 | 67 | 1.4667 | 0.3701 | 0.1753 | 0.3313 | 0.3302 |
+ | No log | 2.0 | 134 | 1.2361 | 0.4050 | 0.2154 | 0.3667 | 0.3662 |
+ | No log | 3.0 | 201 | 1.1369 | 0.4512 | 0.2739 | 0.3999 | 0.4001 |
+ | No log | 4.0 | 268 | 1.0996 | 0.4576 | 0.2916 | 0.4148 | 0.4137 |
+ | No log | 5.0 | 335 | 1.1086 | 0.4524 | 0.2784 | 0.4139 | 0.4127 |
 
 
 ### Framework versions
 
- - Transformers 4.38.2
- - Pytorch 2.1.0+cu121
+ - Transformers 4.39.0
+ - Pytorch 2.2.1+cu121
 - Datasets 2.18.0
 - Tokenizers 0.15.2
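The card above describes a flan-t5-base checkpoint fine-tuned on an unspecified dataset and evaluated with ROUGE, which suggests a summarization-style task. A minimal loading sketch for the updated checkpoint, assuming the weights are published to the Hub; the repo id, the task prefix, and the input text below are hypothetical placeholders and are not stated anywhere in this commit:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical repo id -- substitute the actual model repository.
model_id = "gjyotk/flan-t5-base-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# FLAN-T5 checkpoints are commonly prompted with an instruction-style prefix;
# whether this fine-tune expects one depends on the (unknown) training data.
text = "summarize: <your input document here>"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# max_new_tokens is an arbitrary choice for this sketch.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```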
config.json CHANGED
@@ -56,7 +56,7 @@
   },
   "tie_word_embeddings": false,
   "torch_dtype": "float32",
-  "transformers_version": "4.38.2",
+  "transformers_version": "4.39.0",
   "use_cache": true,
   "vocab_size": 32128
 }
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "decoder_start_token_id": 0,
   "eos_token_id": 1,
   "pad_token_id": 0,
-  "transformers_version": "4.38.2"
+  "transformers_version": "4.39.0"
 }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d41328a0da7490afc35e0cc7c950adb08e016ed9e73eae8e6bbf953360e79133
+oid sha256:61d8217e5103d34028a3e7d1a69ee86baf25b3f38b0831a2346cd6dd99eb6ca2
 size 990345064
runs/Mar22_14-48-13_6f5dd144bc90/events.out.tfevents.1711118895.6f5dd144bc90.679.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b122b4f4656b03b2a60e4a0be65941b20c4cf8f9dc4c0fb56d699aa2e1122c7f
+size 8326
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3eed0ba803998d6c022b472f18b6c09a83280eef2ee61bc5b3cfbd3e0a006dfd
+oid sha256:1c28c85a898075aef7cf4bce9f70c909148815e6fdea1f29a4efa8de49cd3898
 size 5048