---
tags:
- generated_from_trainer
- summarization
datasets:
- samsum
model-index:
- name: pegasus-samsum
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: xsum
      type: xsum
      config: default
      split: test
    metrics:
    - type: rouge
      value: 21.9916
      name: ROUGE-1
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGM1NDhjZDQ0M2ZjZTgxMTg1OWM4NjdmMGI4ODQyNTU2OTQ0OTY2ODc4MWNhNzU4MWM2NTBhZTQ1YzdlMTllMSIsInZlcnNpb24iOjF9.JZWxpX_rKcfTW-QhTsI_TqhL8GdENnfRXPpB8P4W0u3VSS4WC133IqeP9SvD8gWXxeKxYEcjwRDd864v46kBCA
    - type: rouge
      value: 4.4258
      name: ROUGE-2
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODE4ZjM2ZTg2MmRlZTYwNmY4NTU2OWUwODhiODI3ZDQ2MmFkNGI0MGRlOTBlYmJhZTVjNzM2Y2E4ZTBjMjBhNCIsInZlcnNpb24iOjF9.4KVR6h7JIHwDX0EbzS503EhVKIFY7MhMU9xLm5OzboYKRnI_8UrjijIVGkT68k20LdFfcVv2oddtcZkrM0CcBw
    - type: rouge
      value: 14.9274
      name: ROUGE-L
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGU0ZTRhODdlYWNhOGU5YmRkNjU4YWU1MDlhOWZjODI1NGEzMDZkYWI4ZGE0YzBmYTk3ZTljOWM2MGI1NTQ5YiIsInZlcnNpb24iOjF9.0IwWROcDNkQOJ4reQd0zpjyMAeTV5_SG5mydpQZlTVzLlt1SpC6BhitaQFa3_eXcJ_H91JUjJIz9Gjqcow7KDQ
    - type: rouge
      value: 16.6553
      name: ROUGE-LSUM
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGQ2YmFiZWU4ZGY0NzZiNjk1NjMxOTk4NzI0MTU1MDIwNjdmMGYzMWJiYWJhZTMzZjU3NWNiMDVhNTlmYWNmOCIsInZlcnNpb24iOjF9.GjiA4KFUWw1IWn5q1qRWo-B4BEg04xAe0Lkoi_2zlFcyDoJZUgk1_VHzpmK1F72NO5a55lE-GiMRRxJcES5vCA
    - type: loss
      value: 2.3909709453582764
      name: loss
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzMxYTE4ZWUxZWU4ZGZhNWY0ZjA2ZGU2ZTc5N2FjMjA0ZjI5YTFmMzZmMGI0MDEyMGYzZTllM2I2ZWIxZGEwZSIsInZlcnNpb24iOjF9.BnmMbMl2n3cGwkb6C5BF7mesa6WmFp0EcoqYCdkIdXYdM8Jo2svmaUz-JniLLGVuj-UoAyvUfYQe1dXPPPZcBg
    - type: gen_len
      value: 54.1562
      name: gen_len
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWJiNmZmMTIyNmNjOTExYzYyNmI5OGVjMTQwOGY3NDk1YTUwYzdlZWE4YmMwYWVmOTE0Mjg3MTEzYjEzMTgwNiIsInZlcnNpb24iOjF9.IisBh3s_yDr6nF-VwobRo8Fe63Qc8Ku_2KgNfjwv16yEPQd5kX8rCg056rT_DhtXzxTxNRMyXLKCodIewbaPAQ
---
# pegasus-samsum

This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4807
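The verified metrics in the card metadata above are ROUGE scores, which measure n-gram overlap between a generated summary and a reference. For illustration only, here is a minimal pure-Python sketch of ROUGE-1 F1 (unigram overlap); the reported values were computed with the official evaluation tooling, not this function:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Illustrative ROUGE-1 F1: clipped unigram overlap between two strings."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # each unigram counted at most min(cand, ref) times
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat on the mat", "the cat is on the mat"), 4))  # → 0.8333
```

Note that real ROUGE implementations add stemming, tokenization rules, and bootstrap aggregation, which this sketch omits.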
## Model description

Generates dialogue summaries; fine-tuned on the samsum dataset.
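A minimal usage sketch with the Transformers `pipeline` API. The model id `pegasus-samsum` is a placeholder here; substitute the actual Hub namespace under which this checkpoint is published:

```python
from transformers import pipeline

# "pegasus-samsum" is a placeholder repo id -- replace with the real Hub path.
summarizer = pipeline("summarization", model="pegasus-samsum")

dialogue = """\
Hannah: Hey, do you have Betty's number?
Amanda: Lemme check.
Amanda: Sorry, can't find it.
"""
print(summarizer(dialogue)[0]["summary_text"])
```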
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 15
- total_train_batch_size: 15
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
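The hyperparameters above map roughly onto the following `TrainingArguments` configuration (a sketch against the Transformers 4.20 API, not the exact training script; note the effective batch size of 15 comes from gradient accumulation, not the per-device batch size):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pegasus-samsum",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=15,  # effective train batch size: 1 * 15 = 15
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```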
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5421        | 0.51  | 500  | 1.4807          |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1