|
--- |
|
task_categories: |
|
- text-classification |
|
language: |
|
- en |
|
size_categories: |
|
- 1K<n<10K |
|
extra_gated_prompt: >-
  This dataset includes sensitive and explicit material. You may encounter
  descriptions that are sexually explicit, graphic, or otherwise unsuitable for
  all audiences. Viewer discretion is strongly advised. Please proceed only if
  you are comfortable and consent to viewing this type of content. You agree
  not to use the dataset to conduct experiments that cause harm to human
  subjects.
|
extra_gated_fields:
  Company: text
  Country: country
  I want to use this dataset for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: other
  I agree to use this dataset for non-commercial use ONLY: checkbox
|
license: cc-by-2.0 |
|
--- |
|
|
|
***X-Sensitive*** is a multi-label dataset designed to identify sensitive language on social media.

It consists of 7 labels and includes a total of 8,000 posts extracted from ***X***.

Each post is assigned one or more of the following labels based on its content: ***Drugs, Sex, Conflictual, Spam, Profanity, and Self-harm***.

More details are available in the [reference paper](https://arxiv.org/abs/2411.19832).
|
|
|
The goal of ***X-Sensitive*** is to serve as a valuable resource for developing online moderation tools. The following models have been trained on ***X-Sensitive*** with this aim: |
|
|
|
- [twitter-roberta-large-sensitive-multilabel](https://huggingface.co/cardiffnlp/twitter-roberta-large-sensitive-multilabel) |
|
- [twitter-roberta-base-sensitive-multilabel](https://huggingface.co/cardiffnlp/twitter-roberta-base-sensitive-multilabel) |
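A multi-label classifier scores each category independently, so a post can receive several labels at once. A minimal sketch of turning per-label sigmoid scores into a label set (label names are taken from this card; their order and the 0.5 threshold are illustrative assumptions):

```python
# Label names as used in this dataset; the ordering here is an assumption.
LABELS = ["conflictual", "profanity", "sex", "drugs", "selfharm", "spam"]

def scores_to_labels(scores, threshold=0.5):
    """Return the names of all labels whose score reaches the threshold."""
    return [name for name, score in zip(LABELS, scores) if score >= threshold]

print(scores_to_labels([0.10, 0.92, 0.30, 0.05, 0.20, 0.60]))
# ['profanity', 'spam']
```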
|
|
|
We also provide binary versions of the models, which classify each post as either sensitive or not sensitive:
|
|
|
- [twitter-roberta-large-sensitive-binary](https://huggingface.co/cardiffnlp/twitter-roberta-large-sensitive-binary) |
|
- [twitter-roberta-base-sensitive-binary](https://huggingface.co/cardiffnlp/twitter-roberta-base-sensitive-binary) |
|
|
|
## Dataset Structure |
|
|
|
### Data Splits |
|
|
|
| Name | #Entries | |
|
|--------------|---------------| |
|
| ***train*** | 5,000 | |
|
| ***test*** | 2,000 | |
|
| ***validation*** | 1,000 | |
|
|
|
### Data Instances |
|
An example from the `train` split looks as follows.
|
|
|
```python |
|
{'#labels': 1, |
|
'conflictual': 0, |
|
'conflictual_highlight': array([], dtype=object), |
|
'drugs': 0, |
|
'drugs_highlight': array([], dtype=object), |
|
'keyword': 'fuckin', |
|
'labels': array(['profanity'], dtype=object), |
|
'profanity': 1, |
|
'profanity_highlight': array([array(['fucking'], dtype=object), array(['fucking'], dtype=object), |
|
array(['fucking'], dtype=object)], dtype=object), |
|
'selfharm': 0, |
|
'selfharm_highlight': array([], dtype=object), |
|
'sex': 0, |
|
'sex_highlight': array([], dtype=object), |
|
'spam': 0, |
|
'spam_highlight': array([], dtype=object), |
|
'text': 'i think the idea of aliens is so fucking cool'} |
|
``` |
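For training, the per-category binary columns shown above can be collapsed into a single multi-hot target vector. A sketch using the field names from the instance (the column order is an assumption):

```python
# Field names taken from the example instance above; their order is an assumption.
LABELS = ["conflictual", "profanity", "sex", "drugs", "selfharm", "spam"]

# A trimmed copy of the example instance shown above.
example = {
    "conflictual": 0, "profanity": 1, "sex": 0,
    "drugs": 0, "selfharm": 0, "spam": 0,
    "text": "i think the idea of aliens is so fucking cool",
}

# Multi-hot target: one 0/1 entry per label, in a fixed order.
target = [example[name] for name in LABELS]
print(target)  # [0, 1, 0, 0, 0, 0]
```

The `#labels` field of an instance should equal the number of ones in this vector.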
|
|
|
### Labels |
|
| Label Number | Label Name | Description |
|--------------|------------|-------------|
| 0 | conflictual | Conflictual language. An attack based on protected categories (race, color, caste, gender, etc.) or other characteristics. |
| 1 | profanity | Language containing slurs and profanity, even when not directed at a specific entity. |
| 2 | sex | Sexually explicit content. Pornographic or other types of sexual content. |
| 3 | drugs | Drug-related content. Posts describing or promoting drug use. |
| 4 | selfharm | Self-harm. Posts depicting, promoting, or glorifying violence or harm against oneself, such as eating disorders or suicide. |
| 5 | spam | Irrelevant content that is unsolicited. |
|
|
|
|
|
|
|
|
|
## Citation Information |
|
``` |
|
@article{antypas2024sensitive, |
|
title={Sensitive Content Classification in Social Media: A Holistic Resource and Evaluation}, |
|
author={Antypas, Dimosthenis and Sen, Indira and Perez-Almendros, Carla and Camacho-Collados, Jose and Barbieri, Francesco}, |
|
journal={arXiv preprint arXiv:2411.19832}, |
|
year={2024} |
|
} |
|
``` |