Modalities: Tabular, Text
Formats: JSON
Languages: English
Libraries: Datasets, pandas
Content warning: this dataset includes sensitive and explicit material. You may encounter descriptions that are sexually explicit, graphic, or otherwise unsuitable for all audiences. Viewer discretion is strongly advised. Please proceed only if you are comfortable with and consent to viewing this type of content. You agree not to use the dataset to conduct experiments that cause harm to human subjects.


X-Sensitive is a multi-label dataset designed to identify sensitive language in social media. It comprises a total of 8,000 posts extracted from X, and each post is assigned one or more of the following six labels based on its content: conflictual, profanity, sex, drugs, self-harm, and spam. More details can be found in the reference paper.

The goal of X-Sensitive is to serve as a resource for developing online moderation tools. Models trained on X-Sensitive for this purpose are listed on the dataset page, including binary versions in which each post is classified as either sensitive or not-sensitive.

Dataset Structure

Data Splits

| Name       | #Entries |
|------------|----------|
| train      | 5,000    |
| test       | 2,000    |
| validation | 1,000    |
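The split sizes above add up to the 8,000 posts stated in the introduction. A minimal sanity-check sketch using pandas (one of the libraries listed for this dataset), with the sizes hard-coded from the table rather than fetched:

```python
import pandas as pd

# Split sizes copied from the table above (hard-coded, not downloaded).
splits = pd.DataFrame(
    {"name": ["train", "test", "validation"], "entries": [5000, 2000, 1000]}
)

total = splits["entries"].sum()
print(total)  # 8000, matching the total number of posts stated in the card
```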

Data Instances

An example from the train split looks as follows.

```python
{'#labels': 1,
 'conflictual': 0,
 'conflictual_highlight': array([], dtype=object),
 'drugs': 0,
 'drugs_highlight': array([], dtype=object),
 'keyword': 'fuckin',
 'labels': array(['profanity'], dtype=object),
 'profanity': 1,
 'profanity_highlight': array([array(['fucking'], dtype=object), array(['fucking'], dtype=object),
       array(['fucking'], dtype=object)], dtype=object),
 'selfharm': 0,
 'selfharm_highlight': array([], dtype=object),
 'sex': 0,
 'sex_highlight': array([], dtype=object),
 'spam': 0,
 'spam_highlight': array([], dtype=object),
 'text': 'i think the idea of aliens is so fucking cool'}
```
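The per-category integer columns appear redundant with the `labels` array, and a post's binary sensitive/not-sensitive status can be derived from either. A minimal sketch over a plain-dict version of the instance above (numpy arrays replaced with plain lists for brevity):

```python
# The instance above, simplified: numpy arrays replaced with plain lists.
example = {
    "#labels": 1,
    "conflictual": 0,
    "drugs": 0,
    "profanity": 1,
    "selfharm": 0,
    "sex": 0,
    "spam": 0,
    "labels": ["profanity"],
    "text": "i think the idea of aliens is so fucking cool",
}

CATEGORIES = ["conflictual", "profanity", "sex", "drugs", "selfharm", "spam"]

# Recover the active labels from the per-category binary columns...
active = [c for c in CATEGORIES if example[c] == 1]
assert active == example["labels"]        # ...consistent with the `labels` array
assert len(active) == example["#labels"]  # ...and with the `#labels` count

# Binary view, as used by the binary models: sensitive if any category fires.
is_sensitive = bool(active)
print(is_sensitive)  # True
```

Note that the category order in `CATEGORIES` matching the order inside `labels` is an assumption made for this single instance; in general, comparing as sets is safer.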

Labels

| Label Number | Label Name  | Description |
|--------------|-------------|-------------|
| 0            | conflictual | Conflictual language: an attack based on protected attributes (race, color, caste, gender, etc.) or other categories. |
| 1            | profanity   | Language containing slurs and profanity, even if not directed towards a specific entity. |
| 2            | sex         | Sexually explicit content: pornographic or other types of sexual content. |
| 3            | drugs       | Drug-related content. |
| 4            | selfharm    | Self-harm: posts depicting, promoting, or glorifying violence or harm against oneself, such as eating disorders or suicide. |
| 5            | spam        | Irrelevant content that is unsolicited. |

Citation Information

@article{antypas2024sensitive,
  title={Sensitive Content Classification in Social Media: A Holistic Resource and Evaluation},
  author={Antypas, Dimosthenis and Sen, Indira and Perez-Almendros, Carla and Camacho-Collados, Jose and Barbieri, Francesco},
  journal={arXiv preprint arXiv:2411.19832},
  year={2024}
}