# pegasus-samsum

This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the SAMSum dataset. It achieves the following results on the evaluation set:
- Loss: 1.3839
## Intended uses & limitations
Intended uses:
- Dialogue summarization (e.g., chat logs, meetings)
- Text summarization for conversational datasets
Limitations:
- May struggle with very long conversations that exceed the model's input window, and with non-dialogue text (see the length check sketch below).
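The input window is a hard limit rather than a soft preference: this checkpoint inherits PEGASUS's fixed encoder length (1,024 tokens is the usual figure for google/pegasus-cnn_dailymail, but treat that as an assumption and read the limit off the tokenizer). A minimal sketch for checking whether a conversation fits before summarizing:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("seddiktrk/pegasus-samsum")

# Sample dialogue in SAMSum style; substitute your own conversation.
dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure!"

n_tokens = len(tokenizer(dialogue)["input_ids"])
if n_tokens > tokenizer.model_max_length:
    print(f"Dialogue is {n_tokens} tokens; anything beyond "
          f"{tokenizer.model_max_length} tokens will be truncated.")
```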
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto the `Seq2SeqTrainer` API follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
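Since the training script itself is not part of this card, here is a minimal sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments` and `Seq2SeqTrainer`. The tokenization lengths (1024/128) and the 500-step evaluation cadence are assumptions; everything else mirrors the list:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_ckpt = "google/pegasus-cnn_dailymail"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt)

samsum = load_dataset("samsum")  # needs `pip install py7zr` to decompress

def tokenize(batch):
    # Max lengths are assumptions; they are not stated in the card.
    features = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128,
                       truncation=True)
    features["labels"] = labels["input_ids"]
    return features

tokenized = samsum.map(tokenize, batched=True,
                       remove_columns=samsum["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="pegasus-samsum",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # 1 x 16 = total train batch size 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=2,
    eval_strategy="steps",           # assumed; matches the 500-step eval log
    eval_steps=500,
)

# Trainer's default AdamW uses betas=(0.9, 0.999) and eps=1e-8,
# matching the optimizer line in the hyperparameter list.
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```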
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6026        | 0.5431 | 500  | 1.4875          |
| 1.4737        | 1.0861 | 1000 | 1.4040          |
| 1.4735        | 1.6292 | 1500 | 1.3839          |
### Test results

| rouge1   | rouge2   | rougeL   | rougeLsum |
|:--------:|:--------:|:--------:|:---------:|
| 0.427614 | 0.200571 | 0.340648 | 0.340738  |
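The evaluation script is likewise not included, but the scores above should be reproducible along these lines with the `evaluate` library. The batch size is an arbitrary choice, and the generation settings are copied from the usage example below:

```python
import evaluate
import torch
from datasets import load_dataset
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1
pipe = pipeline("summarization", model="seddiktrk/pegasus-samsum",
                device=device)

test = load_dataset("samsum", split="test")  # needs `pip install py7zr`

# Generate summaries for every test dialogue, then score against references.
outputs = pipe(test["dialogue"], batch_size=8,
               length_penalty=0.8, num_beams=8, max_length=128)
predictions = [out["summary_text"] for out in outputs]

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=test["summary"]))
# e.g. {'rouge1': 0.4276, 'rouge2': 0.2006, 'rougeL': 0.3406, 'rougeLsum': 0.3407}
```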
## How to use

You can use this model with the `transformers` library for dialogue summarization. Here's an example in Python:

```python
from transformers import pipeline
import torch

device = 0 if torch.cuda.is_available() else -1
pipe = pipeline("summarization",
                model="seddiktrk/pegasus-samsum",
                device=device)

custom_dialogue = """\
Seddik: Hey, have you tried using PEGASUS for summarization?
John: Yeah, I just started experimenting with it last week!
Seddik: It's pretty powerful, especially for abstractive summaries.
John: I agree! The results are really impressive.
Seddik: I was thinking of using it for my next project. Want to collaborate?
John: Absolutely! We could make some awesome improvements together.
Seddik: Perfect, let's brainstorm ideas this weekend.
John: Sounds like a plan!
"""

# Summarize the dialogue with beam search
gen_kwargs = {"length_penalty": 0.8, "num_beams": 8, "max_length": 128}
print(pipe(custom_dialogue, **gen_kwargs)[0]["summary_text"])
```
Example output:

```
John started using PEG for summarization last week. Seddik is thinking of using it for his next project. John and Seddik will brainstorm ideas this weekend.
```
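If you want more control than the pipeline offers, the call above is roughly equivalent to this lower-level sketch (it reuses `custom_dialogue` from the example above, and the generation settings are copied from `gen_kwargs`):

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("seddiktrk/pegasus-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("seddiktrk/pegasus-samsum")

# Tokenize the dialogue, truncating to the model's input window.
inputs = tokenizer(custom_dialogue, truncation=True, return_tensors="pt")

with torch.no_grad():
    summary_ids = model.generate(**inputs,
                                 length_penalty=0.8,
                                 num_beams=8,
                                 max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```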
## Framework versions

- Transformers 4.44.0
- PyTorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1