ModernBERT NLI
ModernBERT for reasoning and zero-shot classification
Hugging Face Datasets is a great library, but it lacks standardization, and datasets require preprocessing work to be used interchangeably. tasksource automates this and facilitates scalable, reproducible multi-task learning. Each dataset is standardized to a MultipleChoice, Classification, or TokenClassification dataset with identical fields. We do not support generation tasks, as they are addressed by promptsource. All implemented preprocessings are in tasks.py or tasks.md. A preprocessing is a function that accepts a dataset and returns the standardized dataset. Preprocessing code is concise and human-readable.
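To illustrate the idea, here is a minimal sketch of such a preprocessing: a function that maps a raw NLI example onto fixed, shared field names. The field names below are assumptions for illustration, not tasksource's actual schema.

```python
# Hypothetical sketch: map a raw NLI example onto a fixed
# Classification-style schema so datasets become interchangeable.
# Field names here are illustrative assumptions.
def standardize_nli(example):
    return {
        "sentence1": example["premise"],
        "sentence2": example["hypothesis"],
        "labels": example["label"],
    }

raw = {"premise": "A man is eating.", "hypothesis": "Someone is eating.", "label": 0}
print(standardize_nli(raw))
# → {'sentence1': 'A man is eating.', 'sentence2': 'Someone is eating.', 'labels': 0}
```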
GitHub: https://github.com/sileod/tasksource
pip install tasksource
from tasksource import list_tasks, load_task

df = list_tasks()
for task_id in df[df.task_type == "MultipleChoice"].id:
    dataset = load_task(task_id)
    # all yielded datasets can be used interchangeably
See the 600+ supported tasks in tasks.md (200+ MultipleChoice tasks, 200+ Classification tasks), and feel free to request a new task. Datasets are downloaded to $HF_DATASETS_CACHE (like any Hugging Face dataset), so make sure you have >100GB of free space there.
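If your default cache volume is too small, the cache location can be redirected before downloading; the path below is just an example.

```shell
# Point the Hugging Face datasets cache at a volume with enough free space.
# The path is an example; choose any location with >100GB available.
export HF_DATASETS_CACHE="$HOME/hf_datasets_cache"
mkdir -p "$HF_DATASETS_CACHE"
```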
A text encoder pretrained on tasksource reached state-of-the-art results: 🤗/deberta-v3-base-tasksource-nli
I can help you integrate tasksource in your experiments. [email protected]
More details in this article:
@inproceedings{sileo-2024-tasksource-large,
title = "tasksource: A Large Collection of {NLP} tasks with a Structured Dataset Preprocessing Framework",
author = "Sileo, Damien",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.1361",
pages = "15655--15684",
}