add non-factual instances
README.md CHANGED
PlainFact is a high-quality, human-annotated dataset with fine-grained explanation (i.e., added information) annotations designed for Plain Language Summarization tasks, released alongside the [PlainQAFact](https://github.com/zhiwenyou103/PlainQAFact) factuality evaluation framework. It is collected from the [Cochrane database](https://www.cochranelibrary.com/) and sampled from the CELLS dataset ([Guo et al., 2024](https://doi.org/10.1016/j.jbi.2023.104580)).

PlainFact is a sentence-level benchmark that splits the summaries into individual sentences, each carrying fine-grained explanation annotations. In total, it contains 200 plain language summary-abstract pairs (2,740 sentences).

In addition to the factual plain language sentences, we also generate a contrasting non-factual example for each plain language sentence. These contrasting examples are produced by perturbing the original sentences with GPT-4o, following the faithfulness perturbation criteria introduced in APPLS ([Guo et al., 2024](https://aclanthology.org/2024.emnlp-main.519/)).

> Currently, we have only released the annotations for **Explanation** sentences. We will release the full version of PlainFact (including Category and Relation information) soon. Stay tuned!

Here are explanations of the column headings:

- **Target_Sentence_factual**: The original, factual plain language sentence.
- **Target_Sentence_non_factual**: The perturbed (non-factual) plain language sentence.
- **External**: Whether the sentence includes information that is not explicitly present in the scientific abstract (yes: explanation; no: simplification).
- **Original_Abstract**: The scientific abstract corresponding to each sentence/summary.

You can load our dataset as follows:

```python
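# A minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub;
# the repo ID below is a placeholder -- replace it with PlainFact's actual Hub path.
from datasets import load_dataset

dataset = load_dataset("your-org/PlainFact")  # hypothetical repo ID

# Inspect one annotated instance using the columns described above
# (the "train" split name is also an assumption).
example = dataset["train"][0]
print(example["Target_Sentence_factual"])      # original factual plain language sentence
print(example["Target_Sentence_non_factual"])  # GPT-4o-perturbed counterpart
print(example["External"])                     # yes: explanation, no: simplification
print(example["Original_Abstract"])            # source scientific abstract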