---
language: en
datasets:
- cnn_dailymail
tags:
- summarization
- t5
- flan-t5
- transformers
- huggingface
- fine-tuned
license: apache-2.0
model-index:
- name: FLAN-T5 Base Fine-Tuned on CNN/DailyMail
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: CNN/DailyMail
      type: cnn_dailymail
    metrics:
    - type: rouge
      value: 25.33
      name: Rouge-1
    - type: rouge
      value: 11.96
      name: Rouge-2
    - type: rouge
      value: 20.68
      name: Rouge-L
    - type: rouge
      value: 23.81
      name: Rouge-Lsum
---

# FLAN-T5 Base Fine-Tuned on CNN/DailyMail

This model is a fine-tuned version of [`google/flan-t5-base`](https://huggingface.co/google/flan-t5-base) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset using the Hugging Face Transformers library.

## 📝 Task

**Abstractive Summarization**: Given a news article, generate a concise summary.

---

## 📊 Evaluation Results

The model was fine-tuned on 20,000 training samples and evaluated on 2,000 validation/test samples. Performance was measured with ROUGE metrics:

| Metric | Score |
|-------------|--------|
| ROUGE-1 | 25.33 |
| ROUGE-2 | 11.96 |
| ROUGE-L | 20.68 |
| ROUGE-Lsum | 23.81 |
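
The official scores above come from the full ROUGE package (with stemming and sentence-level Lsum handling), but the core idea behind ROUGE-1 is just clipped unigram overlap between a generated summary and a reference. A simplified, self-contained illustration (the `rouge1_f` helper and example sentences are hypothetical, for intuition only):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # each unigram counted at most min(cand, ref) times
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())  # fraction of candidate unigrams matched
    recall = overlap / sum(ref.values())      # fraction of reference unigrams matched
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat on the mat", "the cat is on the mat"), 2))  # → 0.83
```

Real evaluations (including the numbers reported here) should use a proper implementation such as the `rouge_score` package, which adds stemming and the longest-common-subsequence variants (ROUGE-L/Lsum).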

---

## 📦 Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model = T5ForConditionalGeneration.from_pretrained("AbdullahAlnemr1/flan-t5-summarizer")
tokenizer = T5Tokenizer.from_pretrained("AbdullahAlnemr1/flan-t5-summarizer")

# FLAN-T5 expects a task prefix; "summarize:" matches the fine-tuning format
input_text = "summarize: The US president met with the Senate to discuss..."
inputs = tokenizer(input_text, return_tensors="pt", max_length=512, truncation=True)

# Beam search with early stopping tends to give more fluent summaries than greedy decoding
summary_ids = model.generate(inputs["input_ids"], max_length=128, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```