t5_recommendation_sports_equipment_english

This model is a fine-tuned version of t5-large on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4968
  • Rouge1: 69.8413
  • Rouge2: 61.9048
  • Rougel: 69.8413
  • Rougelsum: 70.2381
  • Gen Len: 4.2381
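For intuition about the ROUGE scores above, here is a minimal sketch of ROUGE-1 F1 (unigram overlap between a generated text and a reference). This is a simplified re-implementation for illustration only; the card's numbers come from the standard ROUGE implementation used during evaluation, which also applies stemming and other normalization.

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram-overlap precision/recall, no stemming."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Clipped unigram overlap: each reference token can be matched at most
    # as many times as it appears in the reference.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("tennis racket", "tennis racket grip"), 4))  # → 0.8
```

The very short Gen Len (~4.2 tokens) suggests the model emits brief labels rather than sentences, which is why unigram-level ROUGE-1 and ROUGE-L land so close together here.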

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
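The effective batch size and the linear schedule follow directly from these hyperparameters. The sketch below shows the arithmetic in plain Python; the total step count of 60 is taken from the training log, and the no-warmup decay is an assumption about how the linear scheduler was configured.

```python
# Hyperparameters from the list above.
learning_rate = 1e-4
train_batch_size = 4
gradient_accumulation_steps = 4

# Gradients are accumulated over 4 micro-batches of 4 examples each,
# so each optimizer step effectively sees 16 examples.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # → 16

def linear_lr(step: int, total_steps: int, base_lr: float = learning_rate) -> float:
    # Linear decay from base_lr to 0 over training (assuming zero warmup steps).
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total_steps = 60  # final optimizer step reached in the training log
print(linear_lr(0, total_steps))   # → 0.0001
print(linear_lr(60, total_steps))  # → 0.0
```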

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 0.96  | 6    | 7.0208          | 13.5224 | 1.8519  | 13.7870 | 13.5032   | 18.7143 |
| No log        | 1.92  | 12   | 1.8113          | 20.4762 | 14.2857 | 20.4762 | 20.9524   | 3.6667  |
| No log        | 2.88  | 18   | 0.7760          | 23.8095 | 4.7619  | 23.3333 | 23.3333   | 4.1429  |
| No log        | 4.0   | 25   | 0.5784          | 38.4127 | 23.8095 | 38.8889 | 39.9206   | 4.0476  |
| No log        | 4.96  | 31   | 0.5181          | 54.1270 | 42.8571 | 54.8413 | 54.6825   | 3.9524  |
| No log        | 5.92  | 37   | 0.4786          | 62.6984 | 52.3810 | 62.6984 | 62.6984   | 3.9048  |
| No log        | 6.88  | 43   | 0.4605          | 64.2857 | 52.3810 | 64.6032 | 64.6032   | 4.2857  |
| No log        | 8.0   | 50   | 0.6243          | 67.4603 | 57.1429 | 67.4603 | 67.4603   | 4.3810  |
| No log        | 8.96  | 56   | 0.5484          | 64.2857 | 57.1429 | 65.0794 | 65.0794   | 4.1429  |
| No log        | 9.6   | 60   | 0.4968          | 69.8413 | 61.9048 | 69.8413 | 70.2381   | 4.2381  |

Framework versions

  • Transformers 4.36.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0

Model: navkar98/t5_recommendation_sports_equipment_english

Base model: google-t5/t5-large