bart-base-asqa-cb

This model is a fine-tuned version of facebook/bart-base on the ASQA dataset. It achieves the following results on the evaluation set:

  • Loss: 2.7878
  • Rougelsum: 36.5701
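
For reference, RougeLsum is the summary-level variant of ROUGE-L: it scores the longest common subsequence (LCS) between prediction and reference, with newline-based sentence splitting. A minimal token-level ROUGE-L F1 sketch, simplified to a single reference with no stemming or sentence splitting (the actual metric is typically computed with the rouge_score package):

```python
def lcs_len(a, b):
    # classic dynamic-programming longest common subsequence length
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(prediction, reference):
    # whitespace tokenization; real ROUGE uses its own tokenizer/stemmer
    pred, ref = prediction.split(), reference.split()
    lcs = lcs_len(pred, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(pred), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```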

Model description

More information needed

Intended uses & limitations

More information needed
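
No usage details are documented, but as a fine-tuned facebook/bart-base checkpoint the model loads like any BART seq2seq model. A minimal inference sketch (the input question and generation settings below are illustrative assumptions, not the settings used for the reported scores):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# load the fine-tuned checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("din0s/bart-base-asqa-cb")
model = AutoModelForSeq2SeqLM.from_pretrained("din0s/bart-base-asqa-cb")

# illustrative ambiguous question in the style of ASQA
question = "When was the first Star Wars movie released?"
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
answer = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(answer)
```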

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
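
The linear scheduler decays the learning rate from 5e-06 to zero across the 5,460 optimizer steps recorded below (20 epochs × 273 steps per epoch). A minimal sketch of that schedule, assuming the Trainer default of zero warmup steps (warmup is not listed above):

```python
def linear_lr(step, base_lr=5e-6, total_steps=5460, warmup_steps=0):
    """Learning rate at a given optimizer step under a linear warmup/decay schedule."""
    if step < warmup_steps:
        # linear warmup from 0 up to base_lr
        return base_lr * step / max(1, warmup_steps)
    # linear decay from base_lr down to 0 at the final step
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```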

Training results

| Training Loss | Epoch | Step | Validation Loss | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| No log        | 1.0   | 273  | 2.9082          | 35.2452   |
| 3.4369        | 2.0   | 546  | 2.8642          | 35.9217   |
| 3.4369        | 3.0   | 819  | 2.8426          | 35.9304   |
| 3.1616        | 4.0   | 1092 | 2.8310          | 36.2562   |
| 3.1616        | 5.0   | 1365 | 2.8193          | 36.4633   |
| 3.0814        | 6.0   | 1638 | 2.8091          | 36.6044   |
| 3.0814        | 7.0   | 1911 | 2.8069          | 36.6191   |
| 3.0165        | 8.0   | 2184 | 2.8026          | 36.6380   |
| 3.0165        | 9.0   | 2457 | 2.7978          | 36.6962   |
| 2.9724        | 10.0  | 2730 | 2.7965          | 36.5703   |
| 2.9282        | 11.0  | 3003 | 2.7926          | 36.5339   |
| 2.9282        | 12.0  | 3276 | 2.7916          | 36.5093   |
| 2.8996        | 13.0  | 3549 | 2.7911          | 36.4693   |
| 2.8996        | 14.0  | 3822 | 2.7904          | 36.3852   |
| 2.8803        | 15.0  | 4095 | 2.7888          | 36.6173   |
| 2.8803        | 16.0  | 4368 | 2.7881          | 36.5282   |
| 2.8653        | 17.0  | 4641 | 2.7885          | 36.6131   |
| 2.8653        | 18.0  | 4914 | 2.7878          | 36.6120   |
| 2.8558        | 19.0  | 5187 | 2.7877          | 36.5637   |
| 2.8558        | 20.0  | 5460 | 2.7878          | 36.5701   |

Framework versions

  • Transformers 4.23.0.dev0
  • Pytorch 1.12.1+cu102
  • Datasets 2.4.0
  • Tokenizers 0.12.1
