Update README.md
---
language:
- es
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:

<img src="https://huggingface.co/datasets/Iker/NoticIA/resolve/main/assets/logo.png" style="height: 250px;">
</p>

<h3 align="center">"A Clickbait Article Summarization Dataset in Spanish."</h3>

This repository contains the manual annotations from a second human annotator, used to validate the test set of the NoticIA dataset.

The full NoticIA dataset is available here: [https://huggingface.co/datasets/Iker/NoticIA](https://huggingface.co/datasets/Iker/NoticIA)

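A minimal sketch of loading the full dataset with the Hugging Face `datasets` library. The helper name `load_noticia` is illustrative (not part of this repo); it assumes `datasets` is installed and uses the repo id from the link above:

```python
def load_noticia(split: str = "test"):
    """Download and return one split of the NoticIA dataset.

    Illustrative helper (not part of this repo); assumes the
    third-party `datasets` library is installed.
    """
    from datasets import load_dataset  # lazy import: third-party dependency
    return load_dataset("Iker/NoticIA", split=split)
```

Note that calling `load_noticia()` downloads the data from the Hub, so it requires network access.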
# Out-of-Scope Use
You cannot use this dataset to develop systems that directly harm the newspapers included in the dataset. This includes using the dataset to train profit-oriented LLMs capable of generating articles from a short text or headline, as well as developing profit-oriented bots that automatically summarize articles without the permission of the article's owner. Additionally, you are not permitted to train a system with this dataset that generates clickbait headlines.

This dataset contains text and headlines from newspapers; therefore, you cannot use it for commercial purposes unless you have a license for the data.

# Dataset Creation
The dataset has been meticulously created by hand. We use two sources to compile clickbait articles:
- The Twitter user [@ahorrandoclick1](https://twitter.com/ahorrandoclick1), who reposts clickbait articles along with a hand-crafted summary. Although we use their summaries as a reference, most of them have been rewritten (750 examples from this source).
- The web demo [⚔️ClickbaitFighter⚔️](https://iker-clickbaitfighter.hf.space/), which runs a model pre-trained on an early iteration of our dataset. We collect all the model inputs/outputs and manually correct them (100 examples from this source).

# Who are the annotators?
The dataset was originally annotated by [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/) and has been validated by [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139).
The annotation took ~40 hours.