
text_summarization-finetuned-stocknews_1900_100

This model is a fine-tuned version of Falconsai/text_summarization, trained on a stock-news dataset (per the model name; the dataset itself is not published). It achieves the following results on the evaluation set:

  • Loss: 1.6071
  • ROUGE-1: 15.4764
  • ROUGE-2: 7.3425
  • ROUGE-L: 13.0298
  • ROUGE-Lsum: 14.3613
  • Gen Len: 19.0
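
For quick experimentation, the checkpoint can be loaded with the standard summarization pipeline. This is a minimal sketch, assuming the model is published on the Hub under dhiya96/text_summarization-finetuned-stocknews_1900_100 (the repository this card belongs to); the example article is illustrative.

```python
from transformers import pipeline

# Load this checkpoint from the Hub (repo id taken from this model card).
summarizer = pipeline(
    "summarization",
    model="dhiya96/text_summarization-finetuned-stocknews_1900_100",
)

article = (
    "Shares of Acme Corp rose 4% in early trading after the company "
    "reported quarterly revenue well ahead of analyst estimates and "
    "raised its full-year guidance."  # illustrative input, not from the dataset
)

# The constant Gen Len of 19.0 in the results suggests generation hits a
# short length cap; raise max_length if longer summaries are needed.
result = summarizer(article, max_length=20, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```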

Model description

The checkpoint is a 60.5M-parameter sequence-to-sequence model with F32 Safetensors weights, inherited from the Falconsai/text_summarization base. No further description has been provided.

Intended uses & limitations

No usage guidance has been provided. Given the base model and the fine-tuning data implied by the name, the natural use is short abstractive summaries of stock-market news. Note that the constant generation length of 19.0 tokens in the results below suggests generation was hitting a length cap during evaluation, so summaries are very short by default.

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40
  • mixed_precision_training: Native AMP
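
For reference, the hyperparameters above map onto Seq2SeqTrainingArguments roughly as follows. This is a hedged sketch, not the authors' script: output_dir is illustrative, and evaluation_strategy/predict_with_generate are assumptions inferred from the per-epoch metrics in the table below.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="text_summarization-finetuned-stocknews_1900_100",  # illustrative
    learning_rate=2e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumed: metrics are reported once per epoch
    predict_with_generate=True,   # assumed: needed to compute ROUGE on generations
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the default optimizer
# in Transformers 4.38, so no explicit optimizer settings are needed.
```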

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|------------|---------|
| No log        | 1.0   | 102  | 1.5996          | 15.7162 | 7.3225  | 13.1679 | 14.5316    | 19.0    |
| No log        | 2.0   | 204  | 1.5991          | 15.7364 | 7.3916  | 13.2205 | 14.5865    | 19.0    |
| No log        | 3.0   | 306  | 1.5948          | 15.7337 | 7.4936  | 13.2031 | 14.5941    | 19.0    |
| No log        | 4.0   | 408  | 1.5935          | 15.7661 | 7.4892  | 13.1138 | 14.5123    | 19.0    |
| 1.4093        | 5.0   | 510  | 1.5972          | 15.6328 | 7.2837  | 13.1138 | 14.4789    | 19.0    |
| 1.4093        | 6.0   | 612  | 1.6016          | 15.5382 | 7.3117  | 13.0203 | 14.3907    | 19.0    |
| 1.4093        | 7.0   | 714  | 1.5983          | 15.5582 | 7.2532  | 12.9421 | 14.3971    | 19.0    |
| 1.4093        | 8.0   | 816  | 1.6039          | 15.5287 | 7.3152  | 13.002  | 14.3652    | 19.0    |
| 1.4093        | 9.0   | 918  | 1.6016          | 15.5916 | 7.3367  | 13.0811 | 14.442     | 19.0    |
| 1.3525        | 10.0  | 1020 | 1.6017          | 15.749  | 7.6355  | 13.1754 | 14.6339    | 19.0    |
| 1.3525        | 11.0  | 1122 | 1.5992          | 15.6529 | 7.5216  | 13.1041 | 14.5668    | 19.0    |
| 1.3525        | 12.0  | 1224 | 1.5977          | 15.64   | 7.3843  | 13.0609 | 14.5366    | 19.0    |
| 1.3525        | 13.0  | 1326 | 1.5993          | 15.6516 | 7.4595  | 13.1143 | 14.5799    | 19.0    |
| 1.3525        | 14.0  | 1428 | 1.6040          | 15.6532 | 7.5787  | 13.0764 | 14.5464    | 19.0    |
| 1.3156        | 15.0  | 1530 | 1.5998          | 15.4999 | 7.349   | 13.016  | 14.4233    | 19.0    |
| 1.3156        | 16.0  | 1632 | 1.6039          | 15.4718 | 7.2392  | 12.9167 | 14.3196    | 19.0    |
| 1.3156        | 17.0  | 1734 | 1.6026          | 15.5434 | 7.376   | 12.9885 | 14.3673    | 19.0    |
| 1.3156        | 18.0  | 1836 | 1.6008          | 15.4092 | 7.2119  | 12.9495 | 14.286     | 19.0    |
| 1.3156        | 19.0  | 1938 | 1.6009          | 15.4604 | 7.4049  | 13.0264 | 14.3634    | 19.0    |
| 1.2849        | 20.0  | 2040 | 1.6028          | 15.4735 | 7.3749  | 12.9979 | 14.3637    | 19.0    |
| 1.2849        | 21.0  | 2142 | 1.6025          | 15.617  | 7.5495  | 13.0912 | 14.4945    | 19.0    |
| 1.2849        | 22.0  | 2244 | 1.6061          | 15.65   | 7.6043  | 13.119  | 14.5419    | 19.0    |
| 1.2849        | 23.0  | 2346 | 1.6039          | 15.5747 | 7.5283  | 13.0601 | 14.4706    | 19.0    |
| 1.2849        | 24.0  | 2448 | 1.6071          | 15.4923 | 7.4246  | 12.9747 | 14.3495    | 19.0    |
| 1.2625        | 25.0  | 2550 | 1.6030          | 15.5403 | 7.4373  | 13.1005 | 14.4791    | 19.0    |
| 1.2625        | 26.0  | 2652 | 1.6044          | 15.5232 | 7.4625  | 13.049  | 14.4455    | 19.0    |
| 1.2625        | 27.0  | 2754 | 1.6038          | 15.4961 | 7.4241  | 13.0409 | 14.4496    | 19.0    |
| 1.2625        | 28.0  | 2856 | 1.6048          | 15.5079 | 7.551   | 13.0814 | 14.4369    | 19.0    |
| 1.2625        | 29.0  | 2958 | 1.6067          | 15.4629 | 7.4087  | 13.0123 | 14.3897    | 19.0    |
| 1.2418        | 30.0  | 3060 | 1.6052          | 15.5104 | 7.518   | 13.0891 | 14.4284    | 19.0    |
| 1.2418        | 31.0  | 3162 | 1.6051          | 15.5104 | 7.4773  | 13.0686 | 14.4114    | 19.0    |
| 1.2418        | 32.0  | 3264 | 1.6044          | 15.5491 | 7.5342  | 13.1145 | 14.4742    | 19.0    |
| 1.2418        | 33.0  | 3366 | 1.6064          | 15.5321 | 7.4773  | 13.0686 | 14.4336    | 19.0    |
| 1.2418        | 34.0  | 3468 | 1.6055          | 15.5193 | 7.5178  | 13.0887 | 14.4521    | 19.0    |
| 1.2313        | 35.0  | 3570 | 1.6057          | 15.4739 | 7.4526  | 13.0326 | 14.3947    | 19.0    |
| 1.2313        | 36.0  | 3672 | 1.6057          | 15.4486 | 7.3244  | 12.9881 | 14.3346    | 19.0    |
| 1.2313        | 37.0  | 3774 | 1.6067          | 15.4764 | 7.3795  | 13.0402 | 14.3886    | 19.0    |
| 1.2313        | 38.0  | 3876 | 1.6072          | 15.4594 | 7.3028  | 12.9813 | 14.3339    | 19.0    |
| 1.2313        | 39.0  | 3978 | 1.6070          | 15.4764 | 7.3795  | 13.0402 | 14.3886    | 19.0    |
| 1.2274        | 40.0  | 4080 | 1.6071          | 15.4764 | 7.3425  | 13.0298 | 14.3613    | 19.0    |
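
Validation loss bottoms out at 1.5935 around epoch 4 and drifts slightly upward afterwards while ROUGE stays essentially flat, so the later epochs add little. The "No log" entries in the training-loss column presumably mean the first logging step (every 500 optimizer steps by default) had not yet been reached; the first logged value appears at epoch 5, step 510, consistent with that default. The ROUGE values above are fractions scaled by 100. A minimal sketch of computing them with the `evaluate` library, the usual pairing with the Trainer's compute_metrics hook (the exact evaluation script is not published, and the predictions/references here are illustrative):

```python
import evaluate

rouge = evaluate.load("rouge")

# Illustrative strings; the actual evaluation ran over the held-out split.
predictions = ["shares rallied after the earnings beat"]
references = ["the stock rallied after the company beat earnings estimates"]

scores = rouge.compute(
    predictions=predictions,
    references=references,
    use_stemmer=True,
)
# evaluate returns fractions in [0, 1]; the table reports them scaled by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```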

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2