# Dataset Viewer

Auto-converted to Parquet.

Column summary (value ranges as reported by the viewer):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| datasetId | string (length) | 5 | 121 |
| author | string (length) | 2 | 42 |
| last_modified | date | 2021-04-29 15:34:29 | 2025-04-15 01:30:44 |
| downloads | int64 | 0 | 5.7M |
| likes | int64 | 0 | 7.69k |
| tags | sequence (length) | 1 | 7.92k |
| task_categories | sequence (length) | 0 | 48 |
| createdAt | date | 2022-03-02 23:29:22 | 2025-04-15 01:30:44 |
| card | string (length) | 21 | 1M |
**Dataset:** KakologArchives/KakologArchives

- **author:** KakologArchives
- **last_modified:** 2025-04-15T01:26:39
- **downloads:** 5,696,585
- **likes:** 15
- **tags:** ["task_categories:text-classification", "language:ja", "license:mit", "region:us"]
- **task_categories:** ["text-classification"]
- **createdAt:** 2023-05-12T13:31:56
---
pretty_name: ニコニコ実況 過去ログアーカイブ
license: mit
language:
- ja
task_categories:
- text-classification
---

# Niconico Jikkyo Past Log Archive (ニコニコ実況 過去ログアーカイブ)

The Niconico Jikkyo Past Log Archive is a dataset collecting every past-log comment posted to [Niconico Jikkyo](https://jk.nicovideo.jp) from the start of the service to the present.

Back in December 2020, Niconico Jikkyo was [relaunched as an official channel within Niconico Live](https://blog.nicovideo.jp/niconews/143148.html). With this change, the old system in operation since November 2009 was discontinued (effectively the end of the service); support on consumer devices such as torne and BRAVIA ended across the board, and the roughly 11 years of past logs, packed with the raw voices of their time, were set to be lost as well.

Residents of 5ch's DTV board therefore launched a plan to archive the full 11 years of logs for all channels before the old Niconico Jikkyo shut down. After many twists and turns, Nekopanda managed to capture the complete past logs of all channels, including radio and BS broadcasts, and the 11 years of logs were saved from vanishing into the digital sea.

However, because the old API was retired, past logs can no longer be fetched via an API, and with the archive totaling roughly 150 GB, finding the range of logs you want is nowhere near as easy as it used to be.

Meanwhile, in the new Niconico Jikkyo, now an official channel within Niconico Live, timeshifts (the equivalent of the old service's past logs) can only be watched for three weeks, after which the logs become unavailable. Regular (non-premium) members also have to reserve a timeshift in advance, so the old convenience has been lost.

We believe that the comments about Japanese TV broadcasts posted to Niconico Jikkyo are historically valuable material that vividly captures the public mood and backdrop of their time.

To preserve all of Niconico Jikkyo's past logs for posterity, this dataset combines all old Niconico Jikkyo logs up to 2020/12/15, as distributed by Nekopanda, with logs from the new Niconico Jikkyo (including community jikkyo programs), and, since 2024/06/10, the same-day logs of [NX-Jikkyo](https://nx-jikkyo.tsukumijima.net/), an alternative comment server for jikkyo; new logs are collected every five minutes and merged continuously.

There is also an [API](https://jikkyo.tsukumijima.net/) for fetching past logs easily. Feel free to use it as well.

## Dataset Structure

### Builder Config

| Key | Value Type | Default Value | Description |
| --------------- | ---------- | ------------- | ----------- |
| channel_id | string | None | ID of the Niconico Jikkyo channel to fetch logs for (all channels if omitted) |
| year | int | None | Year of the logs to fetch (all years if omitted) |
| number_of_files | int | None | Number of log files to fetch (all files if omitted) |

### Data Splits

| Split | Approximate Size | Description |
| ------- | ---------------- | ----------- |
| sample | 1GB | As a sample, fetches all past-log comments for TOKYO MX (ID: jk9) posted during 2022. About 1 GB. |
| all | 190GB | Fetches all past-log comments for all channels and all periods. Beware: this is over 190 GB. |

### Data Fields

| Field | Type | Description |
| --------------- | -------- | ----------- |
| thread | string | Thread ID of the comment |
| no | int64 | Comment number |
| vpos | int64 | Playback position of the comment, counted from the thread start (1/100 s) |
| date | int64 | UNIX timestamp of the comment post time |
| date_usec | int64 | Sub-second part of the comment post time |
| user_id | string | User ID (anonymized when the 184 command is set; reshuffled about once a week) |
| mail | string | Comment commands (e.g. 184, red naka big; may be omitted) |
| premium | boolean | True if the commenter is a premium member |
| anonymity | boolean | True if the comment is anonymous |
| content | string | Comment body (beware: multi-line comments such as ASCII art occasionally appear) |

## Example

```python
from datasets import load_dataset

dataset = load_dataset(
    'KakologArchives/KakologArchives',
    'all',
    channel_id='jk211',
    year=2023,
    number_of_files=10,
)

for data in dataset['train']:
    print(data)
```

## Licensing Information

[MIT License](https://opensource.org/license/mit/)
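As a quick-start addendum: a minimal sketch for browsing the smaller `sample` split described above (assuming `sample` is passed as the config name, the same way `all` is in the card's example) and for turning the `date`/`date_usec` fields into a datetime (assuming `date_usec` is in microseconds, as the name suggests):

```python
from datetime import datetime, timezone

from datasets import load_dataset

# 'sample': all TOKYO MX (jk9) comments posted during 2022 (~1 GB).
dataset = load_dataset('KakologArchives/KakologArchives', 'sample')

comment = dataset['train'][0]
# `date` is a UNIX timestamp; `date_usec` holds the sub-second part
# (assumed to be microseconds here). Displayed in UTC for simplicity.
posted_at = datetime.fromtimestamp(
    comment['date'] + comment['date_usec'] / 1_000_000, tz=timezone.utc
)
print(posted_at, comment['content'])
```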
**Dataset:** opentensor/openvalidators

- **author:** opentensor
- **last_modified:** 2023-09-25T14:03:34
- **downloads:** 3,863,361
- **likes:** 9
- **tags:** ["license:mit", "size_categories:1M<n<10M", "region:us"]
- **task_categories:** null
- **createdAt:** 2023-06-15T15:29:34
---
license: mit
viewer: False
size_categories:
- 1M<n<10M
---

# Dataset Card for Openvalidators dataset

## Dataset Description

- **Repository:** https://github.com/opentensor/validators
- **Homepage:** https://bittensor.com/

### Dataset Summary

The OpenValidators dataset, created by the OpenTensor Foundation, is a continuously growing collection of data generated by the [OpenValidators](https://github.com/opentensor/validators) project in [W&B](https://wandb.ai/opentensor-dev/openvalidators/table). It contains millions of records and serves researchers, data scientists, and miners in the Bittensor network. The dataset provides information on network performance, node behaviors, and wandb run details. Researchers can gain insights and detect patterns, while data scientists can use it for training models and analysis. Miners can use the generated data to fine-tune their models and enhance their incentives in the network. The dataset's continuous updates support collaboration and innovation in decentralized computing.

### Version support and revisions

This dataset is in constant evolution, so to facilitate data management each data schema is versioned in a Hugging Face dataset branch, and legacy data can be easily retrieved. The main branch (or default revision) always holds the latest version of the dataset, following the latest schema adopted by the openvalidators.

The current state of data organization is as follows:

- `v1.0`: All data collected under the first openvalidators schema, covering versions `1.0.0` through `1.0.8`.
- `main`: Current state of the dataset, following the latest schema adopted by the openvalidators (>= `1.1.0`).

### How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The OpenValidators dataset lets you extract data by **run_id**, by **OpenValidators version**, and by **multiple OpenValidators versions**. The dataset can be downloaded and prepared in one call to your local drive with the `load_dataset` function.

**Downloading by run id**

For example, to download the data for a specific run, specify the corresponding **OpenValidators version** and the **wandb run id** in the format `version/raw_data/run_id.parquet`:

```python
from datasets import load_dataset

version = '1.1.0'  # OpenValidators version
run_id = '0drg98iy'  # WandB run id
run_id_dataset = load_dataset('opentensor/openvalidators', data_files=f'{version}/raw_data/{run_id}.parquet')
```

_Please note that only completed run_ids are included in the dataset. Runs that are still in progress will be ingested shortly after they finish._

**Downloading by OpenValidators version**

You can also use the `datasets` library to download all the runs within a given **OpenValidators** version. That can be useful for researchers and data enthusiasts looking to analyze a specific **OpenValidators** version state.

```python
from datasets import load_dataset

version = '1.1.0'  # OpenValidators version
version_dataset = load_dataset('opentensor/openvalidators', data_files=f'{version}/raw_data/*')
```

**Downloading by multiple OpenValidators versions**

With the `datasets` library, users can efficiently download runs from multiple **OpenValidators** versions. Accessing data across versions supports downstream tasks such as fine-tuning for mining or large-scale data analysis.

```python
from datasets import load_dataset

versions = ['1.1.0', '1.1.1', ...]  # Desired versions for extraction
data_files = [f'{version}/raw_data/*' for version in versions]  # Set data files directories
dataset = load_dataset('opentensor/openvalidators', data_files={'test': data_files})
```

**Downloading legacy data using revisions**

```python
from datasets import load_dataset

version = '1.0.4'  # OpenValidators version
run_id = '0plco3n0'  # WandB run id
revision = 'v1.0'  # Dataset revision
run_id_dataset = load_dataset('opentensor/openvalidators', data_files=f'{version}/raw_data/{run_id}.parquet', revision=revision)
```

> Note: You can interact with legacy data in all the ways mentioned above, as long as your data scope stays within the same revision.

**Analyzing metadata**

All state related to the wandb data ingestion can be accessed easily using pandas and the Hugging Face datasets structure. The metadata contains relevant information about each run, including user information, config information, and ingestion state.

```python
import pandas as pd

version = '1.1.0'  # OpenValidators version for metadata analysis
df = pd.read_csv(f'hf://datasets/opentensor/openvalidators/{version}/metadata.csv')
```

## Dataset Structure

### Data Instances

**versioned raw_data**

The data is provided as-is from the wandb logs, without further preprocessing or tokenization. It is located at `version/raw_data`, where each file is a wandb run.

**metadata**

This dataset defines the current state of the wandb data ingestion by **run id**.

### Data Fields

**Raw data**

The versioned raw_data collected from W&B follows this schema:

- `rewards`: (float64) Reward vector for given step
- `completion_times`: (float64) List of completion times for a given prompt
- `completions`: (string) List of completions received for a given prompt
- `_runtime`: (float64) Runtime of the event
- `_timestamp`: (float64) Timestamp of the event
- `name`: (string) Prompt type, e.g. 'followup', 'answer', 'augment'
- `block`: (float64) Current block at given step
- `gating_loss`: (float64) Gating model loss for given step
- `rlhf_reward_model`: (float64) Output vector of the rlhf reward model
- `relevance_filter`: (float64) Output vector of the relevance scoring reward model
- `dahoas_reward_model`: (float64) Output vector of the dahoas reward model
- `blacklist_filter`: (float64) Output vector of the blacklist filter
- `nsfw_filter`: (float64) Output vector of the nsfw filter
- `prompt_reward_model`: (float64) Output vector of the prompt reward model
- `reciprocate_reward_model`: (float64) Output vector of the reciprocate reward model
- `diversity_reward_model`: (float64) Output vector of the diversity reward model
- `set_weights`: (float64) Output vector of the set weights
- `uids`: (int64) Queried uids
- `_step`: (int64) Step of the event
- `prompt`: (string) Prompt text string
- `step_length`: (float64) Elapsed time from the beginning to the end of a run step
- `best`: (string) Best completion for given prompt

**Metadata**

- `run_id`: (string) Wandb Run Id
- `completed`: (boolean) Flag indicating if the run_id is completed (finished, crashed or killed)
- `downloaded`: (boolean) Flag indicating if the run_id data has been downloaded
- `last_checkpoint`: (string) Last checkpoint of the run_id
- `hotkey`: (string) Hotkey associated with the run_id
- `openvalidators_version`: (string) Version of OpenValidators associated with the run_id
- `problematic`: (boolean) Flag indicating if the run_id data had problems during ingestion
- `problematic_reason`: (string) Reason the run_id is problematic (exception message)
- `wandb_json_config`: (string) JSON configuration associated with the run_id in Wandb
- `wandb_run_name`: (string) Name of the Wandb run
- `wandb_user_info`: (string) Username associated with the Wandb run
- `wandb_tags`: (list) Tags associated with the Wandb run
- `wandb_createdAt`: (string) Timestamp of the run creation in Wandb

## Dataset Creation

### Curation Rationale

This dataset was curated to provide a comprehensive and reliable collection of historical data obtained from the execution of different OpenValidators in the bittensor network. The goal is to support researchers, data scientists and developers with data generated in the network, facilitating the discovery of new insights, network analysis, troubleshooting, and data extraction for downstream tasks like mining.

### Source Data

#### Initial Data Collection and Normalization

The initial data collection process for this dataset involves recurrent collection by a specialized worker responsible for extracting data from wandb and ingesting it into the Hugging Face datasets structure. The collected data is organized by OpenValidators version and run ID to allow efficient data management and granular access. Each run is collected based on its corresponding OpenValidators version tag and grouped into version-specific folders. Within each version folder, a `metadata.csv` file manages the collection state, while the raw data of each run is saved in `.parquet` format with the file name corresponding to the run ID (e.g., `run_id.parquet`). Please note that the code for this data collection process will be released for transparency and reproducibility.

#### Who are the source language producers?

The language producers for this dataset are all the openvalidators that log their data into wandb in conjunction with other nodes of the bittensor network. The main wandb page where the data is sent can be accessed at https://wandb.ai/opentensor-dev/openvalidators/table.

### Licensing Information

The dataset is licensed under the [MIT License](https://github.com/opentensor/validators/blob/main/LICENSE)

### Supported Tasks and Leaderboards

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
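As an addendum to the usage examples above, the `metadata.csv` and the raw data files can be combined. The following is a sketch, not part of the original card: it assumes the `completed` flag and the `version/raw_data/run_id.parquet` layout documented above, and it may enumerate a large number of files for busy versions.

```python
import pandas as pd
from datasets import load_dataset

version = '1.1.0'  # OpenValidators version

# Read the ingestion metadata for this version and keep only completed runs.
metadata = pd.read_csv(f'hf://datasets/opentensor/openvalidators/{version}/metadata.csv')
completed_runs = metadata.loc[metadata['completed'], 'run_id'].tolist()

# Load the raw data for those runs only, one parquet file per run id.
data_files = [f'{version}/raw_data/{run_id}.parquet' for run_id in completed_runs]
dataset = load_dataset('opentensor/openvalidators', data_files={'train': data_files})
```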
**Dataset:** huggingface/documentation-images

- **author:** huggingface
- **last_modified:** 2025-04-14T16:11:20
- **downloads:** 3,248,244
- **likes:** 59
- **tags:** ["license:cc-by-nc-sa-4.0", "size_categories:n<1K", "format:imagefolder", "modality:image", "library:datasets", "library:mlcroissant", "region:us"]
- **task_categories:** null
- **createdAt:** 2022-03-02T23:29:22
---
license: cc-by-nc-sa-4.0
---

### This dataset contains images used in the documentation of HuggingFace's libraries.

HF Team: Please make sure you optimize the assets before uploading them. My favorite tool for this is https://tinypng.com/.
**Dataset:** huggingface/badges

- **author:** huggingface
- **last_modified:** 2025-04-08T17:39:54
- **downloads:** 1,108,989
- **likes:** 42
- **tags:** ["license:mit", "size_categories:n<1K", "format:imagefolder", "modality:image", "library:datasets", "library:mlcroissant", "region:us"]
- **task_categories:** null
- **createdAt:** 2023-02-02T14:55:23
---
license: mit
thumbnail: "https://huggingface.co/datasets/huggingface/badges/resolve/main/badges-thumbnail.png"
---

<style>
.prose img {
  display: inline;
  margin: 0 6px !important;
}
.prose table {
  max-width: 320px;
  margin: 0;
}
</style>

# Badges

A set of badges you can use anywhere. Just update the anchor URL to point to the correct action for your Space. Light or dark background with 4 sizes available: small, medium, large, and extra large.

## How to use?

- With markdown, just copy the badge from: https://huggingface.co/datasets/huggingface/badges/blob/main/README.md?code=true
- With HTML, inspect this page with your web browser and copy the outer html.

## Available sizes

| Small | Medium | Large | Extra large |
| ------------- | :-----------: | ------------- | ------------- |
| 20px (height) | 24px (height) | 36px (height) | 48px (height) |

## Follow us on HF

[![Follow us on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-sm.svg)](https://huggingface.co/organizations)
[![Follow us on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-sm-dark.svg)](https://huggingface.co/organizations)
[![Follow us on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-md.svg)](https://huggingface.co/organizations)
[![Follow us on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-md-dark.svg)](https://huggingface.co/organizations)
[![Follow us on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-lg.svg)](https://huggingface.co/organizations)
[![Follow us on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-lg-dark.svg)](https://huggingface.co/organizations)
[![Follow us on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-xl.svg)](https://huggingface.co/organizations)
[![Follow us on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-us-on-hf-xl-dark.svg)](https://huggingface.co/organizations)

## Paper page

[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-sm.svg)](https://huggingface.co/papers)
[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-sm-dark.svg)](https://huggingface.co/papers)
[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-md.svg)](https://huggingface.co/papers)
[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-md-dark.svg)](https://huggingface.co/papers)
[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-lg.svg)](https://huggingface.co/papers)
[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-lg-dark.svg)](https://huggingface.co/papers)
[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-xl.svg)](https://huggingface.co/papers)
[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-xl-dark.svg)](https://huggingface.co/papers)

## Deploy on Spaces

[![Deploy on Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/deploy-on-spaces-sm.svg)](https://huggingface.co/new-space)
[![Deploy on Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/deploy-on-spaces-sm-dark.svg)](https://huggingface.co/new-space)
[![Deploy on Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/deploy-on-spaces-md.svg)](https://huggingface.co/new-space)
[![Deploy on Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/deploy-on-spaces-md-dark.svg)](https://huggingface.co/new-space)
[![Deploy on Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/deploy-on-spaces-lg.svg)](https://huggingface.co/new-space)
[![Deploy on Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/deploy-on-spaces-lg-dark.svg)](https://huggingface.co/new-space)
[![Deploy on Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/deploy-on-spaces-xl.svg)](https://huggingface.co/new-space)
[![Deploy on Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/deploy-on-spaces-xl-dark.svg)](https://huggingface.co/new-space)

## Duplicate this Space

[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/resolve/main/duplicate-this-space-sm.svg)](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/resolve/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/resolve/main/duplicate-this-space-md.svg)](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/resolve/main/duplicate-this-space-md-dark.svg)](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/resolve/main/duplicate-this-space-lg.svg)](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/resolve/main/duplicate-this-space-lg-dark.svg)](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/resolve/main/duplicate-this-space-xl.svg)](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)
[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/resolve/main/duplicate-this-space-xl-dark.svg)](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery?duplicate=true)

## Open in HF Spaces

[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-sm.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-sm-dark.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md-dark.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-lg.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-lg-dark.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-xl.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-xl-dark.svg)](https://huggingface.co/spaces)

## Open a Discussion

[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-discussion-sm.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-discussion-sm-dark.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-discussion-md.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-discussion-md-dark.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-discussion-lg.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-discussion-lg-dark.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-discussion-xl.svg)](https://huggingface.co/spaces)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-discussion-xl-dark.svg)](https://huggingface.co/spaces)

## Share to Community

[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/share-to-community-sm.svg)](https://huggingface.co/spaces)
[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/share-to-community-sm-dark.svg)](https://huggingface.co/spaces)
[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/share-to-community-md.svg)](https://huggingface.co/spaces)
[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/share-to-community-md-dark.svg)](https://huggingface.co/spaces)
[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/share-to-community-lg.svg)](https://huggingface.co/spaces)
[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/share-to-community-lg-dark.svg)](https://huggingface.co/spaces)
[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/share-to-community-xl.svg)](https://huggingface.co/spaces)
[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/share-to-community-xl-dark.svg)](https://huggingface.co/spaces)

## Sign in with Hugging Face

[![Sign in with Hugging Face](https://huggingface.co/datasets/huggingface/badges/resolve/main/sign-in-with-huggingface-sm.svg)](https://huggingface.co/)
[![Sign in with Hugging Face](https://huggingface.co/datasets/huggingface/badges/resolve/main/sign-in-with-huggingface-sm-dark.svg)](https://huggingface.co/)
[![Sign in with Hugging Face](https://huggingface.co/datasets/huggingface/badges/resolve/main/sign-in-with-huggingface-md.svg)](https://huggingface.co/)
[![Sign in with Hugging Face](https://huggingface.co/datasets/huggingface/badges/resolve/main/sign-in-with-huggingface-md-dark.svg)](https://huggingface.co/)
[![Sign in with Hugging Face](https://huggingface.co/datasets/huggingface/badges/resolve/main/sign-in-with-huggingface-lg.svg)](https://huggingface.co/)
[![Sign in with Hugging Face](https://huggingface.co/datasets/huggingface/badges/resolve/main/sign-in-with-huggingface-lg-dark.svg)](https://huggingface.co/)
[![Sign in with Hugging Face](https://huggingface.co/datasets/huggingface/badges/resolve/main/sign-in-with-huggingface-xl.svg)](https://huggingface.co/)
[![Sign in with Hugging Face](https://huggingface.co/datasets/huggingface/badges/resolve/main/sign-in-with-huggingface-xl-dark.svg)](https://huggingface.co/)

## Open a Pull Request

[![Open a Pull Request](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-pr-sm.svg)](https://huggingface.co/spaces/victor/ChatUI/discussions)
[![Open a Pull Request](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-pr-sm-dark.svg)](https://huggingface.co/spaces/victor/ChatUI/discussions)
[![Open a Pull Request](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-pr-md.svg)](https://huggingface.co/spaces/victor/ChatUI/discussions)
[![Open a Pull Request](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-pr-md-dark.svg)](https://huggingface.co/spaces/victor/ChatUI/discussions)
[![Open a Pull Request](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-pr-lg.svg)](https://huggingface.co/spaces/victor/ChatUI/discussions)
[![Open a Pull Request](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-pr-lg-dark.svg)](https://huggingface.co/spaces/victor/ChatUI/discussions)
[![Open a Pull Request](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-pr-xl.svg)](https://huggingface.co/spaces/victor/ChatUI/discussions)
[![Open a Pull Request](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-a-pr-xl-dark.svg)](https://huggingface.co/spaces/victor/ChatUI/discussions)

## Subscribe to PRO

[![Subscribe to PRO](https://huggingface.co/datasets/huggingface/badges/resolve/main/subscribe-to-pro-sm.svg)](https://huggingface.co/subscribe/pro)
[![Subscribe to PRO](https://huggingface.co/datasets/huggingface/badges/resolve/main/subscribe-to-pro-sm-dark.svg)](https://huggingface.co/subscribe/pro)
[![Subscribe to PRO](https://huggingface.co/datasets/huggingface/badges/resolve/main/subscribe-to-pro-md.svg)](https://huggingface.co/subscribe/pro)
[![Subscribe to PRO](https://huggingface.co/datasets/huggingface/badges/resolve/main/subscribe-to-pro-md-dark.svg)](https://huggingface.co/subscribe/pro)
[![Subscribe to PRO](https://huggingface.co/datasets/huggingface/badges/resolve/main/subscribe-to-pro-lg.svg)](https://huggingface.co/subscribe/pro)
[![Subscribe to PRO](https://huggingface.co/datasets/huggingface/badges/resolve/main/subscribe-to-pro-lg-dark.svg)](https://huggingface.co/subscribe/pro)
[![Subscribe to PRO](https://huggingface.co/datasets/huggingface/badges/resolve/main/subscribe-to-pro-xl.svg)](https://huggingface.co/subscribe/pro)
[![Subscribe to PRO](https://huggingface.co/datasets/huggingface/badges/resolve/main/subscribe-to-pro-xl-dark.svg)](https://huggingface.co/subscribe/pro)

## Follow me on HF

[![Follow me on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-me-on-HF-sm.svg)](https://huggingface.co/Chunte)
[![Follow me on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-me-on-HF-sm-dark.svg)](https://huggingface.co/Chunte)
[![Follow me on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-me-on-HF-md.svg)](https://huggingface.co/Chunte)
[![Follow me on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-me-on-HF-md-dark.svg)](https://huggingface.co/Chunte)
[![Follow me on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-me-on-HF-lg.svg)](https://huggingface.co/Chunte)
[![Follow me on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-me-on-HF-lg-dark.svg)](https://huggingface.co/Chunte)
[![Follow me on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-me-on-HF-xl.svg)](https://huggingface.co/Chunte)
[![Follow me on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/follow-me-on-HF-xl-dark.svg)](https://huggingface.co/Chunte)

## Model on HF

[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-sm.svg)](https://huggingface.co/models)
[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-sm-dark.svg)](https://huggingface.co/models)
[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-md.svg)](https://huggingface.co/models)
[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-md-dark.svg)](https://huggingface.co/models)
[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-lg.svg)](https://huggingface.co/models)
[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-lg-dark.svg)](https://huggingface.co/models)
[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-xl.svg)](https://huggingface.co/models)
[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-xl-dark.svg)](https://huggingface.co/models)

## Dataset on HF

[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-sm.svg)](https://huggingface.co/datasets)
[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-sm-dark.svg)](https://huggingface.co/datasets)
[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-md.svg)](https://huggingface.co/datasets)
[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-md-dark.svg)](https://huggingface.co/datasets)
[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-lg.svg)](https://huggingface.co/datasets)
[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-lg-dark.svg)](https://huggingface.co/datasets)
[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-xl.svg)](https://huggingface.co/datasets)
[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-xl-dark.svg)](https://huggingface.co/datasets)

## Powered by Hugging Face

[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/powered-by-huggingface-light.svg)](https://huggingface.co)
[![Share to Community](https://huggingface.co/datasets/huggingface/badges/resolve/main/powered-by-huggingface-dark.svg)](https://huggingface.co)
**Dataset:** lavita/medical-qa-shared-task-v1-toy

- **author:** lavita
- **last_modified:** 2023-07-20T00:29:06
- **downloads:** 941,974
- **likes:** 18
- **tags:** ["size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"]
- **task_categories:** null
- **createdAt:** 2023-07-20T00:28:51
---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: ending0
    dtype: string
  - name: ending1
    dtype: string
  - name: ending2
    dtype: string
  - name: ending3
    dtype: string
  - name: ending4
    dtype: string
  - name: label
    dtype: int64
  - name: sent1
    dtype: string
  - name: sent2
    dtype: string
  - name: startphrase
    dtype: string
  splits:
  - name: train
    num_bytes: 52480.01886421694
    num_examples: 32
  - name: dev
    num_bytes: 52490.64150943396
    num_examples: 32
  download_size: 89680
  dataset_size: 104970.6603736509
---

# Dataset Card for "medical-qa-shared-task-v1-toy"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
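A minimal loading sketch (not part of the original card), using the `train` and `dev` splits declared in the frontmatter above; judging from the feature names, the layout follows a SWAG-style multiple-choice format, which is an assumption:

```python
from datasets import load_dataset

# Both splits hold 32 examples according to the frontmatter.
ds = load_dataset("lavita/medical-qa-shared-task-v1-toy")

example = ds["train"][0]
# One start phrase, five candidate endings, and an integer gold label.
print(example["startphrase"])
print(example["label"], [example[f"ending{i}"] for i in range(5)])
```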
**Dataset:** HuggingFaceM4/the_cauldron

- **author:** HuggingFaceM4
- **last_modified:** 2024-05-06T13:37:52
- **downloads:** 820,029
- **likes:** 397
- **tags:** ["size_categories:1M<n<10M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:1603.07396", "arxiv:2206.01718", "arxiv:2208.05358", "arxiv:1612.06890", "arxiv:2310.00367", "arxiv:1710.07300", "arxiv:2312.12241", "arxiv:1912.03098", "arxiv:2211.08545", "arxiv:2306.05425", "arxiv:1709.00103", "arxiv:2003.12462", "arxiv:1612.00837", "arxiv:2205.00363", "arxiv:2403.09029", "arxiv:2405.02246", "region:us"]
- **task_categories:** null
- **createdAt:** 2024-04-11T17:53:57
--- dataset_info: - config_name: ai2d features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 435362437.84770346 num_examples: 2434 download_size: 438136609 dataset_size: 435362437.84770346 - config_name: aokvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 871997710.0 num_examples: 16539 download_size: 893265070 dataset_size: 871997710.0 - config_name: chart2text features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1060566797.2728182 num_examples: 26961 download_size: 1103141721 dataset_size: 1060566797.2728182 - config_name: chartqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 784719364.9441738 num_examples: 18265 download_size: 803192402 dataset_size: 784719364.9441738 - config_name: clevr features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 11522617868.0 num_examples: 70000 download_size: 13267429872 dataset_size: 11522617868.0 - config_name: clevr_math features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 13308311206.0 num_examples: 70000 download_size: 16315284 dataset_size: 13308311206.0 - config_name: cocoqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 2213960474.0 num_examples: 46287 download_size: 2393991009 dataset_size: 2213960474.0 - config_name: datikz features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 481233278.0 num_examples: 47974 download_size: 613100257 dataset_size: 481233278.0 - config_name: diagram_image_to_text features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 18877197.0 num_examples: 300 download_size: 18706661 dataset_size: 18877197.0 - config_name: docvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 6885686042.0 num_examples: 10189 download_size: 6887803845 dataset_size: 6885686042.0 - config_name: dvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 3689940101.0 num_examples: 200000 download_size: 4295254110 dataset_size: 3689940101.0 - config_name: figureqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1901887152.0 num_examples: 100000 download_size: 2220036667 dataset_size: 1901887152.0 - 
config_name: finqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 135268568.0 num_examples: 5276 download_size: 123698250 dataset_size: 135268568.0 - config_name: geomverse features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 951640204.0 num_examples: 9303 download_size: 323746516 dataset_size: 951640204.0 - config_name: hateful_memes features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 3035059823.0 num_examples: 8500 download_size: 3054208907 dataset_size: 3035059823.0 - config_name: hitab features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 161130580.0 num_examples: 2500 download_size: 158295807 dataset_size: 161130580.0 - config_name: iam features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1129180352.0 num_examples: 5663 download_size: 1128935602 dataset_size: 1129180352.0 - config_name: iconqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 264513634.7170419 num_examples: 27307 download_size: 326674337 dataset_size: 264513634.7170419 - config_name: infographic_vqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 291677986.0 num_examples: 2118 download_size: 292351760 dataset_size: 291677986.0 - config_name: intergps features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 24982328.291771192 num_examples: 1280 download_size: 24870320 dataset_size: 24982328.291771192 - config_name: localized_narratives features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 21380844262.41927 num_examples: 199998 download_size: 22164342699 dataset_size: 21380844262.41927 - config_name: mapqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 3238062926.0 num_examples: 37417 download_size: 3307676486 dataset_size: 3238062926.0 - config_name: mimic_cgd features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 12592929433.0 num_examples: 70939 download_size: 13147641100 dataset_size: 12592929433.0 - config_name: multihiertt features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1356766489.046 num_examples: 7619 download_size: 1360814135 dataset_size: 1356766489.046 - config_name: nlvr2 
features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 8375492591.0 num_examples: 50426 download_size: 10838882020 dataset_size: 8375492591.0 - config_name: ocrvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 5467134439.0 num_examples: 165746 download_size: 6078073015 dataset_size: 5467134439.0 - config_name: okvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 281454288182.492 num_examples: 9009 download_size: 3009062 dataset_size: 281454288182.492 - config_name: plotqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 7837605221.0 num_examples: 157070 download_size: 5320249066 dataset_size: 7837605221.0 - config_name: raven features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1506550467.0 num_examples: 42000 download_size: 1720691636 dataset_size: 1506550467.0 - config_name: rendered_text features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 11086896502.0 num_examples: 10000 download_size: 11086960376 dataset_size: 11086896502.0 - config_name: robut_sqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 679135952.0 num_examples: 8514 download_size: 678722272 dataset_size: 679135952.0 - config_name: robut_wikisql features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 5950915477.0 num_examples: 74989 download_size: 6160300141 dataset_size: 5950915477.0 - config_name: robut_wtq features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 4023729236.0 num_examples: 38246 download_size: 4061523247 dataset_size: 4023729236.0 - config_name: scienceqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 284601898.76188564 num_examples: 4976 download_size: 283265438 dataset_size: 284601898.76188564 - config_name: screen2words features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1670723783.0 num_examples: 15730 download_size: 1346254268 dataset_size: 1670723783.0 - config_name: spot_the_diff features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 1643123792.0 num_examples: 8566 download_size: 1526740548 dataset_size: 1643123792.0 - config_name: st_vqa features: - name: images 
sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 696265340.0 num_examples: 17247 download_size: 720462890 dataset_size: 696265340.0 - config_name: tabmwp features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 265337140.19648907 num_examples: 22722 download_size: 306643610 dataset_size: 265337140.19648907 - config_name: tallyqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 4267143189.0 num_examples: 98680 download_size: 4662245152 dataset_size: 4267143189.0 - config_name: tat_qa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 73213942.0 num_examples: 2199 download_size: 70862028 dataset_size: 73213942.0 - config_name: textcaps features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 5938676115.0 num_examples: 21953 download_size: 6175419911 dataset_size: 5938676115.0 - config_name: textvqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 5939437331.0 num_examples: 21953 download_size: 6175442839 dataset_size: 5939437331.0 - config_name: tqa features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 380346870.806369 num_examples: 1493 download_size: 378238311 dataset_size: 380346870.806369 - config_name: vistext features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 541250281.0 num_examples: 9969 download_size: 386023352 dataset_size: 541250281.0 - config_name: visual7w features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 4432168161.0 num_examples: 14366 download_size: 4443083495 dataset_size: 4432168161.0 - config_name: visualmrc features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 2941051627.2639995 num_examples: 3027 download_size: 2912911810 dataset_size: 2941051627.2639995 - config_name: vqarad features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 16561537.0 num_examples: 313 download_size: 16226241 dataset_size: 16561537.0 - config_name: vqav2 features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 10630091683.0 num_examples: 82772 download_size: 13479302437 dataset_size: 10630091683.0 - config_name: vsr features: - name: images sequence: image - name: texts list: - name: user dtype: string 
- name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 107489763.0 num_examples: 2157 download_size: 107576214 dataset_size: 107489763.0 - config_name: websight features: - name: images sequence: image - name: texts list: - name: user dtype: string - name: assistant dtype: string - name: source dtype: string splits: - name: train num_bytes: 2011365901.0 num_examples: 10000 download_size: 1601222161 dataset_size: 2011365901.0 configs: - config_name: ai2d data_files: - split: train path: ai2d/train-* - config_name: aokvqa data_files: - split: train path: aokvqa/train-* - config_name: chart2text data_files: - split: train path: chart2text/train-* - config_name: chartqa data_files: - split: train path: chartqa/train-* - config_name: clevr data_files: - split: train path: clevr/train-* - config_name: clevr_math data_files: - split: train path: clevr_math/train-* - config_name: cocoqa data_files: - split: train path: cocoqa/train-* - config_name: datikz data_files: - split: train path: datikz/train-* - config_name: diagram_image_to_text data_files: - split: train path: diagram_image_to_text/train-* - config_name: docvqa data_files: - split: train path: docvqa/train-* - config_name: dvqa data_files: - split: train path: dvqa/train-* - config_name: figureqa data_files: - split: train path: figureqa/train-* - config_name: finqa data_files: - split: train path: finqa/train-* - config_name: geomverse data_files: - split: train path: geomverse/train-* - config_name: hateful_memes data_files: - split: train path: hateful_memes/train-* - config_name: hitab data_files: - split: train path: hitab/train-* - config_name: iam data_files: - split: train path: iam/train-* - config_name: iconqa data_files: - split: train path: iconqa/train-* - config_name: infographic_vqa data_files: - split: train path: infographic_vqa/train-* - config_name: intergps data_files: - split: train path: intergps/train-* - config_name: localized_narratives data_files: - split: train path: localized_narratives/train-* - config_name: mapqa data_files: - split: train path: mapqa/train-* - config_name: mimic_cgd data_files: - split: train path: mimic_cgd/train-* - config_name: multihiertt data_files: - split: train path: multihiertt/train-* - config_name: nlvr2 data_files: - split: train path: nlvr2/train-* - config_name: ocrvqa data_files: - split: train path: ocrvqa/train-* - config_name: okvqa data_files: - split: train path: okvqa/train-* - config_name: plotqa data_files: - split: train path: plotqa/train-* - config_name: raven data_files: - split: train path: raven/train-* - config_name: rendered_text data_files: - split: train path: rendered_text/train-* - config_name: robut_sqa data_files: - split: train path: robut_sqa/train-* - config_name: robut_wikisql data_files: - split: train path: robut_wikisql/train-* - config_name: robut_wtq data_files: - split: train path: robut_wtq/train-* - config_name: scienceqa data_files: - split: train path: scienceqa/train-* - config_name: screen2words data_files: - split: train path: screen2words/train-* - config_name: spot_the_diff data_files: - split: train path: spot_the_diff/train-* - config_name: st_vqa data_files: - split: train path: st_vqa/train-* - config_name: tabmwp data_files: - split: train path: tabmwp/train-* - config_name: tallyqa data_files: - split: train path: tallyqa/train-* - config_name: tat_qa data_files: - split: train path: tat_qa/train-* - config_name: textcaps data_files: - split: train path: textcaps/train-* - config_name: 
textvqa data_files: - split: train path: textvqa/train-* - config_name: tqa data_files: - split: train path: tqa/train-* - config_name: vistext data_files: - split: train path: vistext/train-* - config_name: visual7w data_files: - split: train path: visual7w/train-* - config_name: visualmrc data_files: - split: train path: visualmrc/train-* - config_name: vqarad data_files: - split: train path: vqarad/train-* - config_name: vqav2 data_files: - split: train path: vqav2/train-* - config_name: vsr data_files: - split: train path: vsr/train-* - config_name: websight data_files: - split: train path: websight/train-* ---

# Dataset Card for The Cauldron

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6177322d37f32ecb1e2d4cdf/3q8wnTYvCWyFiCGn2q1OX.png)

## Dataset description

The Cauldron is part of the Idefics2 release. It is a massive collection of 50 vision-language datasets (training sets only) that were used for the fine-tuning of the vision-language model Idefics2.

## Load the dataset

To load the dataset, install the library `datasets` with `pip install datasets`. Then,

```
from datasets import load_dataset
ds = load_dataset("HuggingFaceM4/the_cauldron", "ai2d")
```

to download and load the config `ai2d` for example.

## Data fields

An example of a sample looks as follows:

```
{
    "images" = [PIL.Image]
    "texts" = [
        {
            "user": "Question: How many actions are depicted in the diagram?\nChoices:\nA. 6.\nB. 4.\nC. 8.\nD. 7.\nAnswer with the letter.",
            "assistant": "Answer: D",
            "source": "TQA"
        }
    ]
}
```

In `images`, there is a list of images, to be placed before the text. In `texts`, there is a conversation between a user and an assistant about the images, represented as a list of turns.

## Stats about the datasets in The Cauldron

| Dataset | # images | # Q/A pairs | # tokens |
|----------------------|----------|-------------|------------|
| *General visual question answering* | | | |
| VQAv2 | 82,772 | 443,757 | 1,595,929 |
| COCO-QA | 46,287 | 78,736 | 286,982 |
| Visual7W | 14,366 | 69,817 | 279,268 |
| A-OKVQA | 16,539 | 17,056 | 236,492 |
| TallyQA | 98,680 | 183,986 | 738,254 |
| OK-VQA | 8,998 | 9,009 | 38,853 |
| HatefulMemes | 8,500 | 8,500 | 25,500 |
| VQA-RAD | 313 | 1,793 | 8,418 |
| Captioning | | | |
| LNarratives | 507,444 | 507,444 | 21,328,731 |
| Screen2Words | 15,730 | 15,743 | 143,103 |
| VSR | 2,157 | 3,354 | 10,062 |
| *OCR, document understanding, text transcription* | | | |
| RenderedText | 999,000 | 999,000 | 27,207,774 |
| DocVQA | 10,189 | 39,463 | 337,829 |
| TextCaps | 21,953 | 21,953 | 389,658 |
| TextVQA | 21,953 | 34,602 | 181,918 |
| ST-VQA | 17,247 | 23,121 | 127,846 |
| OCR-VQA | 165,746 | 801,579 | 6,073,824 |
| VisualMRC | 3,027 | 11,988 | 168,828 |
| IAM | 5,663 | 5,663 | 144,216 |
| InfoVQA | 2,118 | 10,074 | 61,048 |
| Diagram image-to-text | 300 | 300 | 22,196 |
| *Chart/figure understanding* | | | |
| Chart2Text | 26,985 | 30,242 | 2,852,827 |
| DVQA | 200,000 | 2,325,316 | 8,346,234 |
| VisText | 7,057 | 9,969 | 1,245,485 |
| ChartQA | 18,271 | 28,299 | 185,835 |
| PlotQA | 157,070 | 20,249,479 | 8478299.278 |
| FigureQA | 100,000 | 1,327,368 | 3,982,104 |
| MapQA | 37,417 | 483,416 | 6,470,485 |
| *Table understanding* | | | |
| TabMWP | 22,729 | 23,059 | 1,948,166 |
| TAT-QA | 2,199 | 13,215 | 283,776 |
| HiTab | 2,500 | 7,782 | 351,299 |
| MultiHiertt | 7,619 | 7,830 | 267,615 |
| FinQA | 5,276 | 6,251 | 242,561 |
| WikiSQL | 74,989 | 86,202 | 9,680,673 |
| SQA | 8,514 | 34,141 | 1,894,824 |
| WTQ | 38,246 | 44,096 | 6,677,013 |
| *Reasoning, logic, maths* | | | |
| GeomVerse | 9,303 | 9,339 | 2,489,459 |
| CLEVR-Math | 70,000 | 788,650 | 3,184,656 |
| CLEVR | 70,000 | 699,989 | 2,396,781 |
| IconQA | 27,315 | 29,859 | 112,969 |
| RAVEN | 42,000 | 42,000 | 105,081 |
| Inter-GPs | 1,451 | 2,101 | 8,404 |
| *Textbook/academic questions* | | | |
| AI2D | 3,099 | 9,708 | 38,832 |
| TQA | 1,496 | 6,501 | 26,004 |
| ScienceQA | 4,985 | 6,218 | 24,872 |
| *Differences between 2 images* | | | |
| NLVR2 | 50,426 | 86,373 | 259,119 |
| GSD | 70,939 | 141,869 | 4,637,229 |
| Spot the diff | 8,566 | 9,524 | 221,477 |
| *Screenshot to code* | | | |
| WebSight | 500,000 | 500,000 | 276,743,299 |
| DaTikz | 47,974 | 48,296 | 59,556,252 |

## Decontamination

The Cauldron contains only the train split of each sub-dataset. On top of that, we removed the few examples containing an image also present in the test splits of MMMU, MathVista or MMBench.

## References to the original datasets

<details>
<summary>References to the original datasets</summary>

@misc{AI2D, title={A Diagram Is Worth A Dozen Images}, author={Aniruddha Kembhavi and Mike Salvato and Eric Kolve and Minjoon Seo and Hannaneh Hajishirzi and Ali Farhadi}, year={2016}, eprint={1603.07396}, archivePrefix={arXiv}, primaryClass={cs.CV} }

@misc{A-OKVQA, title={A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge}, author={Dustin Schwenk and Apoorv Khandelwal and Christopher Clark and Kenneth Marino and Roozbeh Mottaghi}, year={2022}, eprint={2206.01718}, archivePrefix={arXiv}, primaryClass={cs.CV} }

@inproceedings{Chart2Text, title = "Chart-to-Text: Generating Natural Language Descriptions for Charts by Adapting the Transformer Model", author = "Obeid, Jason and Hoque, Enamul", editor = "Davis, Brian and Graham, Yvette and Kelleher, John and Sripada, Yaji", booktitle = "Proceedings of the 13th International Conference on Natural Language Generation", month = dec, year = "2020", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.inlg-1.20", doi = "10.18653/v1/2020.inlg-1.20", pages = "138--147", }

@inproceedings{ChartQA, title = "{C}hart{QA}: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning", author = "Masry, Ahmed and Long, Do and Tan, Jia Qing and Joty, Shafiq and Hoque, Enamul", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.177", doi = "10.18653/v1/2022.findings-acl.177", pages = "2263--2279", }

@misc{CLEVR-Math, doi = {10.48550/ARXIV.2208.05358}, url = {https://arxiv.org/abs/2208.05358}, author = {Lindström, Adam Dahlgren}, keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7; I.2.10; I.2.6; I.4.8; I.1.4}, title = {CLEVR-Math: A Dataset for Compositional Language, Visual, and Mathematical Reasoning}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} }

@misc{CLEVR, title={CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning}, author={Justin Johnson and Bharath Hariharan and Laurens van der Maaten and Li Fei-Fei and C. Lawrence Zitnick and Ross Girshick}, year={2016}, eprint={1612.06890}, archivePrefix={arXiv}, primaryClass={cs.CV} }

@inproceedings{CocoQA, author = {Ren, Mengye and Kiros, Ryan and Zemel, Richard}, booktitle = {Advances in Neural Information Processing Systems}, editor = {C. Cortes and N. Lawrence and D. Lee and M. Sugiyama and R. Garnett}, pages = {}, publisher = {Curran Associates, Inc.}, title = {Exploring Models and Data for Image Question Answering}, url = {https://proceedings.neurips.cc/paper_files/paper/2015/file/831c2f88a604a07ca94314b56a4921b8-Paper.pdf}, volume = {28}, year = {2015} }

@misc{DaTikz, title={AutomaTikZ: Text-Guided Synthesis of Scientific Vector Graphics with TikZ}, author={Jonas Belouadi and Anne Lauscher and Steffen Eger}, year={2024}, eprint={2310.00367}, archivePrefix={arXiv}, primaryClass={cs.CL} }

Diagram image to text: https://huggingface.co/datasets/Kamizuru00/diagram_image_to_text by @Kamizuru00

@INPROCEEDINGS{DocVQA, author={Mathew, Minesh and Karatzas, Dimosthenis and Jawahar, C. V.}, booktitle={2021 IEEE Winter Conference on Applications of Computer Vision (WACV)}, title={DocVQA: A Dataset for VQA on Document Images}, year={2021}, volume={}, number={}, pages={2199-2208}, keywords={Visualization;Computer vision;Text analysis;Image recognition;Image analysis;Conferences;Layout}, doi={10.1109/WACV48630.2021.00225}}

@inproceedings{DVQA, title={DVQA: Understanding Data Visualizations via Question Answering}, author={Kafle, Kushal and Cohen, Scott and Price, Brian and Kanan, Christopher}, booktitle={CVPR}, year={2018} }

@misc{FigureQA, title={FigureQA: An Annotated Figure Dataset for Visual Reasoning}, author={Samira Ebrahimi Kahou and Vincent Michalski and Adam Atkinson and Akos Kadar and Adam Trischler and Yoshua Bengio}, year={2018}, eprint={1710.07300}, archivePrefix={arXiv}, primaryClass={cs.CV} }

@inproceedings{FinQA, title = "{F}in{QA}: A Dataset of Numerical Reasoning over Financial Data", author = "Chen, Zhiyu and Chen, Wenhu and Smiley, Charese and Shah, Sameena and Borova, Iana and Langdon, Dylan and Moussa, Reema and Beane, Matt and Huang, Ting-Hao and Routledge, Bryan and Wang, William Yang", editor = "Moens, Marie-Francine and Huang, Xuanjing and Specia, Lucia and Yih, Scott Wen-tau", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.300", doi = "10.18653/v1/2021.emnlp-main.300", pages = "3697--3711", }

@misc{GeomVerse, title={GeomVerse: A Systematic Evaluation of Large Models for Geometric Reasoning}, author={Mehran Kazemi and Hamidreza Alvari and Ankit Anand and Jialin Wu and Xi Chen and Radu Soricut}, year={2023}, eprint={2312.12241}, archivePrefix={arXiv}, primaryClass={cs.CV} }

@inproceedings{hatefulmeme, author = {Kiela, Douwe and Firooz, Hamed and Mohan, Aravind and Goswami, Vedanuj and Singh, Amanpreet and Ringshia, Pratik and Testuggine, Davide}, booktitle = {Advances in Neural Information Processing Systems}, editor = {H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. Lin}, pages = {2611--2624}, publisher = {Curran Associates, Inc.}, title = {The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes}, url = {https://proceedings.neurips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Paper.pdf}, volume = {33}, year = {2020} }

@inproceedings{Hitab, title = "{H}i{T}ab: A Hierarchical Table Dataset for Question Answering and Natural Language Generation", author = "Cheng, Zhoujun and Dong, Haoyu and Wang, Zhiruo and Jia, Ran and Guo, Jiaqi and Gao, Yan and Han, Shi and Lou, Jian-Guang and Zhang, Dongmei", editor = "Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.78", doi = "10.18653/v1/2022.acl-long.78", pages = "1094--1110", }

@article{IAM, author = {Marti, Urs-Viktor and Bunke, H.}, year = {2002}, month = {11}, pages = {39-46}, title = {The IAM-database: An English sentence database for offline handwriting recognition}, volume = {5}, journal = {International Journal on Document Analysis and Recognition}, doi = {10.1007/s100320200071} }

@inproceedings{IconQA, title = {IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning}, author = {Lu, Pan and Qiu, Liang and Chen, Jiaqi and Xia, Tony and Zhao, Yizhou and Zhang, Wei and Yu, Zhou and Liang, Xiaodan and Zhu, Song-Chun}, booktitle = {The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks}, year = {2021} }

@INPROCEEDINGS{InfographicVQA, author={Mathew, Minesh and Bagal, Viraj and Tito, Rubèn and Karatzas, Dimosthenis and Valveny, Ernest and Jawahar, C.
V.}, booktitle={2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)}, title={InfographicVQA}, year={2022}, volume={}, number={}, pages={2582-2591}, keywords={Visualization;Computer vision;Computational modeling;Layout;Data visualization;Benchmark testing;Brain modeling;Document Analysis Datasets;Evaluation and Comparison of Vision Algorithms;Vision and Languages}, doi={10.1109/WACV51458.2022.00264} } @inproceedings{Inter-GPS, title = {Inter-GPS: Interpretable Geometry Problem Solving with Formal Language and Symbolic Reasoning}, author = {Lu, Pan and Gong, Ran and Jiang, Shibiao and Qiu, Liang and Huang, Siyuan and Liang, Xiaodan and Zhu, Song-Chun}, booktitle = {The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)}, year = {2021} } @misc{LocalizedNarratives, title={Connecting Vision and Language with Localized Narratives}, author={Jordi Pont-Tuset and Jasper Uijlings and Soravit Changpinyo and Radu Soricut and Vittorio Ferrari}, year={2020}, eprint={1912.03098}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{MapQA, title={MapQA: A Dataset for Question Answering on Choropleth Maps}, author={Shuaichen Chang and David Palzer and Jialin Li and Eric Fosler-Lussier and Ningchuan Xiao}, year={2022}, eprint={2211.08545}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{MIMIC-IT-General-Scene-Difference, title={MIMIC-IT: Multi-Modal In-Context Instruction Tuning}, author={Bo Li and Yuanhan Zhang and Liangyu Chen and Jinghao Wang and Fanyi Pu and Jingkang Yang and Chunyuan Li and Ziwei Liu}, year={2023}, eprint={2306.05425}, archivePrefix={arXiv}, primaryClass={cs.CV} } @inproceedings{Multihiertt, title = "{M}ulti{H}iertt: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data", author = "Zhao, Yilun and Li, Yunxiang and Li, Chenying and Zhang, Rui", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.454", pages = "6588--6600", } @inproceedings{NLVR2, title = "A Corpus for Reasoning about Natural Language Grounded in Photographs", author = "Suhr, Alane and Zhou, Stephanie and Zhang, Ally and Zhang, Iris and Bai, Huajun and Artzi, Yoav", editor = "Korhonen, Anna and Traum, David and M{\`a}rquez, Llu{\'\i}s", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1644", doi = "10.18653/v1/P19-1644", pages = "6418--6428", } @INPROCEEDINGS{OCR-VQA, author={Mishra, Anand and Shekhar, Shashank and Singh, Ajeet Kumar and Chakraborty, Anirban}, booktitle={2019 International Conference on Document Analysis and Recognition (ICDAR)}, title={OCR-VQA: Visual Question Answering by Reading Text in Images}, year={2019}, volume={}, number={}, pages={947-952}, keywords={Optical character recognition software;Visualization;Task analysis;Knowledge discovery;Text analysis;Text recognition;Character recognition;Optical Character Recognition (OCR), Visual Question Answering (VQA), Document image analysis, textVQA}, doi={10.1109/ICDAR.2019.00156} } @InProceedings{okvqa, author = {Kenneth Marino and Mohammad Rastegari 
and Ali Farhadi and Roozbeh Mottaghi}, title = {OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge}, booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2019}, } @InProceedings{PlotQA, author = {Methani, Nitesh and Ganguly, Pritha and Khapra, Mitesh M. and Kumar, Pratyush}, title = {PlotQA: Reasoning over Scientific Plots}, booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)}, month = {March}, year = {2020} } @inproceedings{RAVEN, title={RAVEN: A Dataset for Relational and Analogical Visual rEasoNing}, author={Zhang, Chi and Gao, Feng and Jia, Baoxiong and Zhu, Yixin and Zhu, Song-Chun}, booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2019} } RenderedText: https://huggingface.co/datasets/wendlerc/RenderedText by @wendlerc @inproceedings{Robut, title = "{R}obu{T}: A Systematic Study of Table {QA} Robustness Against Human-Annotated Adversarial Perturbations", author = "Zhao, Yilun and Zhao, Chen and Nan, Linyong and Qi, Zhenting and Zhang, Wenlin and Tang, Xiangru and Mi, Boyu and Radev, Dragomir", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.334", doi = "10.18653/v1/2023.acl-long.334", pages = "6064--6081", } @inproceedings{SQA, title = "Search-based Neural Structured Learning for Sequential Question Answering", author = "Iyyer, Mohit and Yih, Wen-tau and Chang, Ming-Wei", editor = "Barzilay, Regina and Kan, Min-Yen", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P17-1167", doi = "10.18653/v1/P17-1167", pages = "1821--1831", } @misc{WikiSQL, title={Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning}, author={Victor Zhong and Caiming Xiong and Richard Socher}, year={2017}, eprint={1709.00103}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{WTQ, title = "Compositional Semantic Parsing on Semi-Structured Tables", author = "Pasupat, Panupong and Liang, Percy", editor = "Zong, Chengqing and Strube, Michael", booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = jul, year = "2015", address = "Beijing, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P15-1142", doi = "10.3115/v1/P15-1142", pages = "1470--1480", } @inproceedings{ScienceQA, author = {Lu, Pan and Mishra, Swaroop and Xia, Tanglin and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin}, booktitle = {Advances in Neural Information Processing Systems}, editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. 
Oh}, pages = {2507--2521}, publisher = {Curran Associates, Inc.}, title = {Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering}, url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/11332b6b6cf4485b84afadb1352d3a9a-Paper-Conference.pdf}, volume = {35}, year = {2022} } @inproceedings{screen2words, author = {Wang, Bryan and Li, Gang and Zhou, Xin and Chen, Zhourong and Grossman, Tovi and Li, Yang}, title = {Screen2Words: Automatic Mobile UI Summarization with Multimodal Learning}, year = {2021}, isbn = {9781450386357}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3472749.3474765}, doi = {10.1145/3472749.3474765}, booktitle = {The 34th Annual ACM Symposium on User Interface Software and Technology}, pages = {498–510}, numpages = {13}, keywords = {Mobile UI summarization, dataset., deep learning, language-based UI, screen understanding}, location = {Virtual Event, USA}, series = {UIST '21} } @inproceedings{SpotTheDiff, title = "Learning to Describe Differences Between Pairs of Similar Images", author = "Jhamtani, Harsh and others", editor = "Riloff, Ellen and Chiang, David and Hockenmaier, Julia and Tsujii, Jun{'}ichi", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/D18-1436", doi = "10.18653/v1/D18-1436", pages = "4024--4034", } @INPROCEEDINGS{STVQA, author={Biten, Ali Furkan and Tito, Rubèn and Mafla, Andrés and Gomez, Lluis and Rusiñol, Marçal and Jawahar, C.V. and Valveny, Ernest and Karatzas, Dimosthenis}, booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)}, title={Scene Text Visual Question Answering}, year={2019}, volume={}, number={}, pages={4290-4300}, keywords={Visualization;Task analysis;Knowledge discovery;Text recognition;Cognition;Computer vision;Semantics}, doi={10.1109/ICCV.2019.00439} } @inproceedings{TabMWP, title={Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning}, author={Lu, Pan and Qiu, Liang and Chang, Kai-Wei and Wu, Ying Nian and Zhu, Song-Chun and Rajpurohit, Tanmay and Clark, Peter and Kalyan, Ashwin}, booktitle={International Conference on Learning Representations (ICLR)}, year={2023} } @inproceedings{TallyQA, title={TallyQA: Answering Complex Counting Questions}, author={Acharya, Manoj and Kafle, Kushal and Kanan, Christopher}, booktitle={AAAI}, year={2019} } @inproceedings{TAT-QA, title = "{TAT}-{QA}: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance", author = "Zhu, Fengbin and Lei, Wenqiang and Huang, Youcheng and Wang, Chao and Zhang, Shuo and Lv, Jiancheng and Feng, Fuli and Chua, Tat-Seng", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.254", doi = "10.18653/v1/2021.acl-long.254", pages = "3277--3287" } @misc{textcaps, title={TextCaps: a Dataset for Image Captioning with Reading Comprehension}, author={Oleksii Sidorov and Ronghang Hu and Marcus Rohrbach and Amanpreet Singh}, year={2020}, eprint={2003.12462}, archivePrefix={arXiv}, 
primaryClass={cs.CV} } @inproceedings{textvqa, title={Towards VQA Models That Can Read}, author={Singh, Amanpreet and Natarjan, Vivek and Shah, Meet and Jiang, Yu and Chen, Xinlei and Parikh, Devi and Rohrbach, Marcus}, booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, pages={8317-8326}, year={2019} } @INPROCEEDINGS{TQA, author={Kembhavi, Aniruddha and Seo, Minjoon and Schwenk, Dustin and Choi, Jonghyun and Farhadi, Ali and Hajishirzi, Hannaneh}, booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, title={Are You Smarter Than a Sixth Grader? Textbook Question Answering for Multimodal Machine Comprehension}, year={2017}, volume={}, number={}, pages={5376-5384}, keywords={Knowledge discovery;Visualization;Cognition;Training;Natural languages;Computer vision}, doi={10.1109/CVPR.2017.571} } @inproceedings{VisText, title = {{VisText: A Benchmark for Semantically Rich Chart Captioning}}, author = {Benny J. Tang AND Angie Boggust AND Arvind Satyanarayan}, booktitle = {The Annual Meeting of the Association for Computational Linguistics (ACL)}, year = {2023}, url = {http://vis.csail.mit.edu/pubs/vistext} } @InProceedings{Visual7w, title = {{Visual7W: Grounded Question Answering in Images}}, author = {Yuke Zhu and Oliver Groth and Michael Bernstein and Li Fei-Fei}, booktitle = {{IEEE Conference on Computer Vision and Pattern Recognition}}, year = 2016, } @inproceedings{VisualMRC, author = {Ryota Tanaka and Kyosuke Nishida and Sen Yoshida}, title = {VisualMRC: Machine Reading Comprehension on Document Images}, booktitle = {AAAI}, year = {2021} } @article{VQA-RAD, author = {Lau, Jason and Gayen, Soumya and Ben Abacha, Asma and Demner-Fushman, Dina}, year = {2018}, month = {11}, pages = {180251}, title = {A dataset of clinically generated visual questions and answers about radiology images}, volume = {5}, journal = {Scientific Data}, doi = {10.1038/sdata.2018.251} } @misc{VQAv2, title={Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering}, author={Yash Goyal and Tejas Khot and Douglas Summers-Stay and Dhruv Batra and Devi Parikh}, year={2017}, eprint={1612.00837}, archivePrefix={arXiv}, primaryClass={cs.CV} } @misc{VSR, title={Visual Spatial Reasoning}, author={Fangyu Liu and Guy Emerson and Nigel Collier}, year={2023}, eprint={2205.00363}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{WebSight, title={Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset}, author={Hugo Laurençon and Léo Tronchon and Victor Sanh}, year={2024}, eprint={2403.09029}, archivePrefix={arXiv}, primaryClass={cs.HC} } </details> ## Licensing Information Each of the publicly available sub-datasets present in the Cauldron are governed by specific licensing conditions. Therefore, when making use of them you must take into consideration each of the licenses governing each dataset. To the extent we have any rights in the prompts, these are licensed under CC-BY-4.0. ## Citation Information If you are using this dataset, please cite ``` @misc{laurençon2024matters, title={What matters when building vision-language models?}, author={Hugo Laurençon and Léo Tronchon and Matthieu Cord and Victor Sanh}, year={2024}, eprint={2405.02246}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
mlfoundations/dclm-baseline-1.0
mlfoundations
"2024-07-22T15:27:52"
735,418
216
[ "license:cc-by-4.0", "arxiv:2406.11794", "region:us" ]
null
"2024-06-17T18:57:13"
--- license: cc-by-4.0 dataset_info: features: - name: bff_contained_ngram_count_before_dedupe dtype: int64 - name: language_id_whole_page_fasttext struct: - name: en dtype: float64 - name: metadata struct: - name: Content-Length dtype: string - name: Content-Type dtype: string - name: WARC-Block-Digest dtype: string - name: WARC-Concurrent-To dtype: string - name: WARC-Date dtype: timestamp[s] - name: WARC-IP-Address dtype: string - name: WARC-Identified-Payload-Type dtype: string - name: WARC-Payload-Digest dtype: string - name: WARC-Record-ID dtype: string - name: WARC-Target-URI dtype: string - name: WARC-Type dtype: string - name: WARC-Warcinfo-ID dtype: string - name: WARC-Truncated dtype: string - name: previous_word_count dtype: int64 - name: text dtype: string - name: url dtype: string - name: warcinfo dtype: string - name: fasttext_openhermes_reddit_eli5_vs_rw_v2_bigram_200k_train_prob dtype: float64 --- ## DCLM-baseline DCLM-baseline is a 4T token / 3B document pretraining dataset that achieves strong performance on language model benchmarks. Below are comparisons of models trained on DCLM-baseline with other models in the 7B regime. | Model | Params | Tokens | Open dataset? | CORE | MMLU | EXTENDED | |---------------|--------|--------|---------------|----------|----------|----------| | **Open weights, closed datasets** | | | | | | | | Llama2 | 7B | 2T | ✗ | 49.2 | 45.8 | 34.1 | | DeepSeek | 7B | 2T | ✗ | 50.7 | 48.5 | 35.3 | | Mistral-0.3 | 7B | ? | ✗ | 57.0 | 62.7 | 45.1 | | QWEN-2 | 7B | ? | ✗ | 57.5 | **71.9** | 50.5 | | Llama3 | 8B | 15T | ✗ | 57.6 | 66.2 | 46.3 | | Gemma | 8B | 6T | ✗ | 57.8 | 64.3 | 44.6 | | Phi-3 | 7B | ? | ✗ | **61.0** | 69.9 | **57.9** | | **Open weights, open datasets** | | | | | | | | Falcon | 7B | 1T | ✓ | 44.1 | 27.4 | 25.1 | | Amber | 7B | 1.2T | ✓ | 39.8 | 27.9 | 22.3 | | Crystal | 7B | 1.2T | ✓ | 48.0 | 48.2 | 33.2 | | OLMo-1.7 | 7B | 2.1T | ✓ | 47.0 | 54.0 | 34.2 | | MAP-Neo | 7B | 4.5T | ✓ | **50.2** | **57.1** | **40.4** | | **Models we trained** | | | | | | | | FineWeb edu | 7B | 0.14T | ✓ | 38.7 | 26.3 | 22.1 | | FineWeb edu | 7B | 0.28T | ✓ | 41.9 | 37.3 | 24.5 | | **DCLM-BASELINE** | 7B | 0.14T | ✓ | 44.1 | 38.3 | 25.0 | | **DCLM-BASELINE** | 7B | 0.28T | ✓ | 48.9 | 50.8 | 31.8 | | **DCLM-BASELINE** | 7B | 2.6T | ✓ | **57.1** | **63.7** | **45.4** | ## Dataset Details ### Dataset Description - **Curated by:** The DCLM Team - **Language(s) (NLP):** English - **License:** CC-by-4.0 ### Dataset Sources - **Repository:** https://datacomp.ai/dclm - **Paper:** https://arxiv.org/abs/2406.11794 - **Construction Code:** https://github.com/mlfoundations/dclm ## Uses ### Direct Use DCLM-Baseline is intended to be used as a research baseline for the DCLM benchmark. It demonstrates the importance of data curation in training performant language models. ### Out-of-Scope Use DCLM-Baseline is not intended for training production-ready models or for specific domains such as code and math. It may not perform as well as domain-specific datasets for these tasks. Due to these limitations, the dataset is intended for research use only. DCLM-Baseline is a subset of the DCLM-Pool, which is a corpus of 240 trillion tokens derived from Common Crawl. The dataset is in plain text format. ## Dataset Creation ### Curation Rationale DCLM-Baseline was created to demonstrate the effectiveness of the DCLM testbed in developing high-quality training sets for language models.
It serves as a proof of concept for the data curation strategies enabled by DCLM and is designed to be a research baseline for the benchmark. ### Source Data #### Data Collection and Processing DCLM-Baseline was created by applying a series of cleaning, filtering, and deduplication steps to the raw Common Crawl data (DCLM-Pool). The key steps include: 1. Heuristic cleaning and filtering (reproduction of RefinedWeb) 2. Deduplication using a Bloom filter 3. Model-based filtering using a fastText classifier trained on instruction-formatted data (OpenHermes 2.5 and r/ExplainLikeImFive) #### Who are the source data producers? The source data is from Common Crawl, which is a repository of web crawl data. ### Personal and Sensitive Information [More Information Needed] ## Bias, Risks, and Limitations The dataset may contain biases present in the Common Crawl data. The dataset's performance on code and math tasks is limited compared to its performance on language understanding tasks. DCLM-Baseline is designed for research purposes only. ### Recommendations Users should be aware of the potential biases and limitations of the dataset, especially when using it for specific domains like code and math. The dataset should only be used for research purposes in the context of the DCLM benchmark. ## Citation ```bibtex @misc{li2024datacomplm, title={DataComp-LM: In search of the next generation of training sets for language models}, author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and Saurabh Garg and Rui Xin and Niklas Muennighoff and Reinhard Heckel and Jean Mercat and Mayee Chen and Suchin Gururangan and Mitchell Wortsman and Alon Albalak and Yonatan Bitton and Marianna Nezhurina and Amro Abbas and Cheng-Yu Hsieh and Dhruba Ghosh and Josh Gardner and Maciej Kilian and Hanlin Zhang and Rulin Shao and Sarah Pratt and Sunny Sanyal and Gabriel Ilharco and Giannis Daras and Kalyani Marathe and Aaron Gokaslan and Jieyu Zhang and Khyathi Chandu and Thao Nguyen and Igor Vasiljevic and Sham Kakade and Shuran Song and Sujay Sanghavi and Fartash Faghri and Sewoong Oh and Luke Zettlemoyer and Kyle Lo and Alaaeldin El-Nouby and Hadi Pouransari and Alexander Toshev and Stephanie Wang and Dirk Groeneveld and Luca Soldaini and Pang Wei Koh and Jenia Jitsev and Thomas Kollar and Alexandros G. Dimakis and Yair Carmon and Achal Dave and Ludwig Schmidt and Vaishaal Shankar}, year={2024}, eprint={2406.11794}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
kdexd/red_caps
kdexd
"2024-01-18T11:14:38"
710,259
59
[ "task_categories:image-to-text", "task_ids:image-captioning", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:10M<n<100M", "arxiv:2111.11431", "region:us" ]
[ "image-to-text" ]
"2022-03-02T23:29:22"
--- annotations_creators: - found language_creators: - found language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10M<n<100M source_datasets: - original task_categories: - image-to-text task_ids: - image-captioning paperswithcode_id: redcaps pretty_name: RedCaps dataset_info: features: - name: image_id dtype: string - name: author dtype: string - name: image_url dtype: string - name: raw_caption dtype: string - name: caption dtype: string - name: subreddit dtype: class_label: names: '0': abandonedporn '1': abandoned '2': absoluteunits '3': airplants '4': alltheanimals '5': amateurphotography '6': amateurroomporn '7': animalporn '8': antiques '9': antkeeping '10': ants '11': aquariums '12': architectureporn '13': artefactporn '14': astronomy '15': astrophotography '16': australiancattledog '17': australianshepherd '18': autumnporn '19': averagebattlestations '20': awwducational '21': awwnverts '22': axolotls '23': backpacking '24': backyardchickens '25': baking '26': ballpython '27': barista '28': bassfishing '29': battlestations '30': bbq '31': beagle '32': beardeddragons '33': beekeeping '34': beerandpizza '35': beerporn '36': beerwithaview '37': beginnerwoodworking '38': bengalcats '39': bento '40': bernesemountaindogs '41': berries '42': bettafish '43': bicycling '44': bikecommuting '45': birding '46': birdphotography '47': birdpics '48': birdsofprey '49': birds '50': blackcats '51': blacksmith '52': bladesmith '53': boatporn '54': bonsai '55': bookporn '56': bookshelf '57': bordercollie '58': bostonterrier '59': botanicalporn '60': breadit '61': breakfastfood '62': breakfast '63': bridgeporn '64': brochet '65': budgetfood '66': budgies '67': bulldogs '68': burgers '69': butterflies '70': cabinporn '71': cactus '72': cakedecorating '73': cakewin '74': cameras '75': campingandhiking '76': camping '77': carnivorousplants '78': carpentry '79': carporn '80': cassetteculture '81': castiron '82': castles '83': casualknitting '84': catpictures '85': cats '86': ceramics '87': chameleons '88': charcuterie '89': cheesemaking '90': cheese '91': chefit '92': chefknives '93': chickens '94': chihuahua '95': chinchilla '96': chinesefood '97': churchporn '98': cider '99': cityporn '100': classiccars '101': cockatiel '102': cocktails '103': coffeestations '104': coins '105': cookiedecorating '106': corgi '107': cornsnakes '108': cozyplaces '109': crafts '110': crestedgecko '111': crochet '112': crossstitch '113': crows '114': crystals '115': cupcakes '116': dachshund '117': damnthatsinteresting '118': desertporn '119': designmyroom '120': desksetup '121': dessertporn '122': dessert '123': diy '124': dobermanpinscher '125': doggos '126': dogpictures '127': drunkencookery '128': duck '129': dumpsterdiving '130': earthporn '131': eatsandwiches '132': embroidery '133': entomology '134': equestrian '135': espresso '136': exposureporn '137': eyebleach '138': f1porn '139': farming '140': femalelivingspace '141': fermentation '142': ferrets '143': fireporn '144': fishing '145': fish '146': flowers '147': flyfishing '148': foodporn '149': food '150': foraging '151': fossilporn '152': fountainpens '153': foxes '154': frenchbulldogs '155': frogs '156': gardening '157': gardenwild '158': geckos '159': gemstones '160': geologyporn '161': germanshepherds '162': glutenfree '163': goldenretrievers '164': goldfish '165': gold '166': greatpyrenees '167': grilledcheese '168': grilling '169': guineapigs '170': gunporn '171': guns '172': hamsters '173': handtools '174': healthyfood '175': 
hedgehog '176': helicopters '177': herpetology '178': hiking '179': homestead '180': horses '181': hotpeppers '182': houseplants '183': houseporn '184': husky '185': icecreamery '186': indoorgarden '187': infrastructureporn '188': insects '189': instantpot '190': interestingasfuck '191': interiordesign '192': itookapicture '193': jellyfish '194': jewelry '195': kayakfishing '196': kayaking '197': ketorecipes '198': knifeporn '199': knives '200': labrador '201': leathercraft '202': leopardgeckos '203': lizards '204': lookatmydog '205': macarons '206': machineporn '207': macroporn '208': malelivingspace '209': mead '210': mealprepsunday '211': mechanicalkeyboards '212': mechanicalpencils '213': melts '214': metalworking '215': microgreens '216': microporn '217': mildlyinteresting '218': mineralporn '219': monitors '220': monstera '221': mostbeautiful '222': motorcycleporn '223': muglife '224': mushroomgrowers '225': mushroomporn '226': mushrooms '227': mycology '228': natureisfuckinglit '229': natureporn '230': nebelung '231': orchids '232': otters '233': outdoors '234': owls '235': parrots '236': pelletgrills '237': pens '238': perfectfit '239': permaculture '240': photocritique '241': photographs '242': pics '243': pitbulls '244': pizza '245': plantbaseddiet '246': plantedtank '247': plantsandpots '248': plants '249': pomeranians '250': pottery '251': pourpainting '252': proplifting '253': pugs '254': pug '255': quilting '256': rabbits '257': ramen '258': rarepuppers '259': reeftank '260': reptiles '261': resincasting '262': roomporn '263': roses '264': rottweiler '265': ruralporn '266': sailing '267': salsasnobs '268': samoyeds '269': savagegarden '270': scotch '271': seaporn '272': seriouseats '273': sewing '274': sharks '275': shiba '276': shihtzu '277': shrimptank '278': siamesecats '279': siberiancats '280': silverbugs '281': skyporn '282': sloths '283': smoking '284': snails '285': snakes '286': sneakers '287': sneks '288': somethingimade '289': soup '290': sourdough '291': sousvide '292': spaceporn '293': spicy '294': spiderbro '295': spiders '296': squirrels '297': steak '298': streetphotography '299': succulents '300': superbowl '301': supermodelcats '302': sushi '303': tacos '304': tarantulas '305': tastyfood '306': teaporn '307': tea '308': tequila '309': terrariums '310': thedepthsbelow '311': thriftstorehauls '312': tinyanimalsonfingers '313': tonightsdinner '314': toolporn '315': tools '316': torties '317': tortoise '318': tractors '319': trailrunning '320': trains '321': trucks '322': turtle '323': underwaterphotography '324': upcycling '325': urbanexploration '326': urbanhell '327': veganfoodporn '328': veganrecipes '329': vegetablegardening '330': vegetarian '331': villageporn '332': vintageaudio '333': vintage '334': vinyl '335': volumeeating '336': watches '337': waterporn '338': weatherporn '339': wewantplates '340': wildernessbackpacking '341': wildlifephotography '342': wine '343': winterporn '344': woodcarving '345': woodworking '346': workbenches '347': workspaces '348': yarnaddicts '349': zerowaste - name: score dtype: int32 - name: created_utc dtype: timestamp[s, tz=UTC] - name: permalink dtype: string - name: crosspost_parents sequence: string config_name: all splits: - name: train num_bytes: 3378544525 num_examples: 12011121 download_size: 1061908181 dataset_size: 3378544525 --- # Dataset Card for RedCaps ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset 
Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [RedCaps homepage](https://redcaps.xyz/) - **Repository:** [RedCaps repository](https://github.com/redcaps-dataset/redcaps-downloader) - **Paper:** [RedCaps: web-curated image-text data created by the people, for the people](https://arxiv.org/abs/2111.11431) - **Leaderboard:** - **Point of Contact:** [Karan Desai](mailto:[email protected]) ### Dataset Summary RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit. Images and captions from Reddit depict and describe a wide variety of objects and scenes. The data is collected from a manually curated set of subreddits (350 total), which give coarse image labels and allow steering of the dataset composition without labeling individual instances. RedCaps data is created *by the people, for the people* – it contains everyday things that users like to share on social media, for example hobbies (r/crafts) and pets (r/shiba). Captions often contain specific and fine-grained descriptions (northern cardinal, taj mahal). Subreddit names provide relevant image labels (r/shiba) even when captions may not (mlem!), and sometimes may group many visually unrelated images through a common semantic meaning (r/perfectfit). ### Dataset Preprocessing This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code: ```python from concurrent.futures import ThreadPoolExecutor from functools import partial import io import urllib import PIL.Image from datasets import load_dataset from datasets.utils.file_utils import get_datasets_user_agent USER_AGENT = get_datasets_user_agent() def fetch_single_image(image_url, timeout=None, retries=0): for _ in range(retries + 1): try: request = urllib.request.Request( image_url, data=None, headers={"user-agent": USER_AGENT}, ) with urllib.request.urlopen(request, timeout=timeout) as req: image = PIL.Image.open(io.BytesIO(req.read())) break except Exception: image = None return image def fetch_images(batch, num_threads, timeout=None, retries=0): fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries) with ThreadPoolExecutor(max_workers=num_threads) as executor: batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"])) return batch num_threads = 20 dset = load_dataset("red_caps", "rabbits_2017") dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads}) ``` Some image links point to more than one image. 
You can process and downloaded those as follows: ```python from concurrent.futures import ThreadPoolExecutor from functools import partial import io import os import re import urllib import PIL.Image import datasets from datasets import load_dataset from datasets.utils.file_utils import get_datasets_user_agent USER_AGENT = get_datasets_user_agent() def fetch_single_image(image_url, timeout=None, retries=0): for _ in range(retries + 1): try: request = urllib.request.Request( image_url, data=None, headers={"user-agent": USER_AGENT}, ) with urllib.request.urlopen(request, timeout=timeout) as req: image = PIL.Image.open(io.BytesIO(req.read())) break except Exception: image = None return image def fetch_images(batch, num_threads, timeout=None, retries=0): fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries) with ThreadPoolExecutor(max_workers=num_threads) as executor: batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"])) return batch def process_image_urls(batch): processed_batch_image_urls = [] for image_url in batch["image_url"]: processed_example_image_urls = [] image_url_splits = re.findall(r"http\S+", image_url) for image_url_split in image_url_splits: if "imgur" in image_url_split and "," in image_url_split: for image_url_part in image_url_split.split(","): if not image_url_part: continue image_url_part = image_url_part.strip() root, ext = os.path.splitext(image_url_part) if not root.startswith("http"): root = "http://i.imgur.com/" + root root = root.split("#")[0] if not ext: ext = ".jpg" ext = re.split(r"[?%]", ext)[0] image_url_part = root + ext processed_example_image_urls.append(image_url_part) else: processed_example_image_urls.append(image_url_split) processed_batch_image_urls.append(processed_example_image_urls) batch["image_url"] = processed_batch_image_urls return batch dset = load_dataset("red_caps", "rabbits_2017") dset = dset.map(process_image_urls, batched=True, num_proc=4) features = dset["train"].features.copy() features["image"] = datasets.Sequence(datasets.Image()) num_threads = 20 dset = dset.map(fetch_images, batched=True, batch_size=100, features=features, fn_kwargs={"num_threads": num_threads}) ``` Note that in the above code, we use the `datasets.Sequence` feature to represent a list of images for the multi-image links. ### Supported Tasks and Leaderboards From the paper: > We have used our dataset to train deep neural networks that perform image captioning, and that learn transferable visual representations for a variety of downstream visual recognition tasks (image classification, object detection, instance segmentation). > We anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks, such as image or text retrieval or text-to-image synthesis. ### Languages All of the subreddits in RedCaps use English as their primary language. ## Dataset Structure ### Data Instances Each instance in RedCaps represents a single Reddit image post: ``` { 'image_id': 'bpzj7r', 'author': 'djasz1', 'image_url': 'https://i.redd.it/ho0wntksivy21.jpg', 'raw_caption': 'Found on a friend’s property in the Keys FL. She is now happily living in my house.', 'caption': 'found on a friend's property in the keys fl. 
she is now happily living in my house.', 'subreddit': 3, 'score': 72, 'created_utc': datetime.datetime(2019, 5, 18, 1, 36, 41), 'permalink': '/r/airplants/comments/bpzj7r/found_on_a_friends_property_in_the_keys_fl_she_is/', 'crosspost_parents': None } ``` ### Data Fields - `image_id`: Unique alphanumeric ID of the image post (assigned by Reddit). - `author`: Reddit username of the image post author. - `image_url`: Static URL for downloading the image associated with the post. - `raw_caption`: Textual description of the image, written by the post author. - `caption`: Cleaned version of "raw_caption" by us (see Q35). - `subreddit`: Name of subreddit where the post was submitted. - `score`: Net upvotes (discounting downvotes) received by the image post. This field is equal to `None` if the image post is a crosspost. - `created_utc`: Integer time epoch (in UTC) when the post was submitted to Reddit. - `permalink`: Partial URL of the Reddit post (https://reddit.com/<permalink>). - `crosspost_parents`: List of parent posts. This field is optional. ### Data Splits All the data is contained in training set. The training set has nearly 12M (12,011,111) instances. From the paper: > We intend our dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Hence, all instances in our dataset would be used for training while the validation split is derived from downstream task(s). If users require a validation split, we recommend sampling it such that it follows the same subreddit distribution as entire dataset. ## Dataset Creation ### Curation Rationale From the paper: > Large datasets of image-text pairs are widely used for pre-training generic representations that transfer to a variety of downstream vision and vision-and-language tasks. Existing public datasets of this kind were curated from search engine results (SBU Captions [1]) or HTML alt-text from arbitrary web pages (Conceptual Captions [2, 31]). They performed complex data filtering to deal with noisy web data. Due to aggressive filtering, their data collection is inefficient and diversity is artificially supressed. We argue that the quality of data depends on its source, and the human intent behind its creation. In this work, we explore Reddit – a social media platform, for curating high quality data. We introduce RedCaps – a large dataset of 12M image-text pairs from Reddit. While we expect the use-cases of RedCaps to be similar to existing datasets, we discuss how Reddit as a data source leads to fast and lightweight collection, better data quality, lets us easily steer the data distribution, and facilitates ethically responsible data curation. ### Source Data #### Initial Data Collection and Normalization From the paper: > **Data Collection Pipeline** Reddit’s uniform structure allows us to parallelize data collection as independent tasks – each task involves collecting posts submitted to a single subreddit in one year. Our collection pipeline has three steps: (1) subreddit selection, (2) image post filtering, and (3) caption cleaning. **Step 1**. Subreddit selection: We collect data from a manually curated set of subreddits. Subreddits have their own rules, community norms, and moderators so curating subreddits allows us to steer the dataset’s composition without annotating individual instances. 
We select subreddits with a high volume of images posts, where images tend to be photographs (rather than memes, drawings, screenshots, etc) and post titles tend to describe image content (rather than making jokes, political commentary, etc). We do not select any NSFW, banned, or quarantined subreddits. We want to minimize the number of people that appear in RedCaps, so we omit subreddits whose primary purpose is to share or comment on images of people (such as celebrity pics or user selfies). We choose subreddits focused on general photography (r/pics, r/itookapicture), animals (r/axolotls, r/birdsofprey, r/dachshund), plants (r/roses, r/succulents), objects (r/classiccars, r/trains, r/mechanicalkeyboards), food (r/steak, r/macarons), scenery (r/cityporn1 , r/desertporn), or activities (r/carpentry, r/kayaking). In total we collect data from 350 subreddits; the full list can be found in Appendix A. **Step 2**. Image post filtering: We use Pushshift [41] and Reddit [42, 43] APIs to download all image posts submitted to our selected subreddits from 2008–2020. Posts are collected at least six months after their creation to let upvotes stabilize. We only collect posts with images hosted on three domains: Reddit (i.redd.it), Imgur (i.imgur.com), and Flickr (staticflickr.com). Some image posts contain multiple images (gallery posts) – in this case we only collect the first image and associate it with the caption. We discard posts with < 2 upvotes to avoid unappealing content, and we discard posts marked NSFW (by their authors or subreddit moderators) to avoid pornographic or disturbing content. **Step 3**. Caption cleaning: We expect Reddit post titles to be less noisy than other large-scale sources of image captions such as alt-text [2, 31], so we apply minimal text cleaning. We lowercase captions and use ftfy [44] to remove character accents, emojis, and non-latin characters, following [29, 35, 36]. Then we apply simple pattern matching to discard all sub-strings enclosed in brackets ((.*), [.*]). These sub-strings usually give non-semantic information: original content tags [oc], image resolutions (800x600 px), camera specs (shot with iPhone), self-promotion [Instagram: @user], and other references (link in comments). Finally, like [31] we replace social media handles (words starting with ‘@’) with a [USR] token to protect user privacy and reduce redundancy. Due to such filtering, ≈12K (0.1%) captions in our dataset are empty strings. We do not discard them, as subreddit names alone provide meaningful supervision. Unlike CC-3M or CC-12M that discard captions without nouns or that don’t overlap image tags, we do not discard any instances in this step. Through this pipeline, we collect 13.4M instances from 350 subreddits. Our collection pipeline is less resource-intensive than existing datasets – we do not require webpage crawlers, search engines, or large databases of indexed webpages. RedCaps is easily extensible in the future by selecting more subreddits and collecting posts from future years. Next, we perform additional filtering to mitigate user privacy risks and harmful stereotypes in RedCaps, resulting in final size of 12M instances. #### Who are the source language producers? Reddit is the singular data source for RedCaps. ### Annotations #### Annotation process The dataset is built using fully automatic data collection pipeline which doesn't require any human annotators. #### Who are the annotators? The annotation process doesn't require any human annotators. 
### Personal and Sensitive Information From the paper: > **Does the dataset relate to people?** The dataset pertains to people in that people wrote the captions and posted images to Reddit that we curate in RedCaps. We made specific design choices while curating RedCaps to avoid large quantities of images containing people: (a) We collect data from manually curated subreddits in which most contain primarily pertains to animals, objects, places, or activities. We exclude all subreddits whose primary purpose is to share and describe images of people (such as celebrity photos or user selfies). (b) We use an off-the-shelf face detector to find and remove images with potential presence of human faces. We manually checked 50K random images in RedCaps (Q16) and found 79 images with identifiable human faces – the entire dataset may have ≈19K (0.15%) images with identifiable people. Refer Section 2.2 in the main paper. > **Is it possible to identify one or more natural persons, either directly or indirectly (i.e., in combination with other data) from the dataset?** Yes, all instances in RedCaps include Reddit usernames of their post authors. This could be used to look up the Reddit user profile, and some Reddit users may have identifying information in their profiles. Some images may contain human faces which could be identified by appearance. However, note that all this information is already public on Reddit, and searching it in RedCaps is no easier than searching directly on Reddit. > **Were the individuals in question notified about the data collection?** No. Reddit users are anonymous by default, and are not required to share their personal contact information (email, phone numbers, etc.). Hence, the only way to notify the authors of RedCaps image posts is by sending them private messages on Reddit. This is practically difficult to do manually, and will be classified as spam and blocked by Reddit if attempted to programmatically send a templated message to millions of users. > **Did the individuals in question consent to the collection and use of their data?** Users did not explicitly consent to the use of their data in our dataset. However, by uploading their data on Reddit, they consent that it would appear on the Reddit plaform and will be accessible via the official Reddit API (which we use to collect RedCaps). > **If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?** Users have full control over the presence of their data in our dataset. If users wish to revoke their consent, they can delete the underlying Reddit post – it will be automatically removed dfrom RedCaps since we distributed images as URLs. Moreover, we provide an opt-out request form on our dataset website for anybody to request removal of an individual instance if it is potentially harmful (e.g. NSFW, violates privacy, harmful stereotypes, etc.). ## Considerations for Using the Data ### Social Impact of Dataset From the paper: > **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?** No. ### Discussion of Biases From the paper: > **Harmful Stereotypes**: Another concern with Reddit data is that images or language may represent harmful stereotypes about gender, race, or other characteristics of people [48, 49, 51]. We select only non-NSFW subreddits with active moderation for collecting data. 
This stands in contrast to less curated uses of Reddit data, such as GPT-2 [35] whose training data includes at least 63K documents from banned or quarantined subreddits which may contain toxic language [53]. We attempt to further reduce harmful stereotypes in two ways: > * **NSFW images**: We use the InceptionV3 [54] model from [55] to filter images detected as porn or hentai with confidence ≥ 0.9. Similar to face filtering, we estimated precision of our filtering and estimated amount of missed detections, shown in Table 1. The model detects 87K images with low precision (∼1%) – most detections are non-NSFW images with pink and beige hues. > * **Potentially derogatory language**: We filter instances whose captions contain words or phrases from a common blocklist [56]. It is important to note that such coarse filtering might suppress language from marginalized groups reclaiming slurs [51]; however, as RedCaps is not intended to describe people, we believe this is a pragmatic tradeoff to avoid propagating harmful labels. > **Reddit demographics**: Reddit’s user demographics are not representative of the population at large. Compared to US adults, Reddit users skew male (69% vs 49%), young (58% 18-29 years old vs 22%), college educated (36% vs 28%), and politically liberal (41% vs 25%) [57]. Reddit users are predominantly white (63%) [57], and 49% of desktop traffic to Reddit comes from the United States [58]. All of the subreddits in RedCaps use English as their primary language. Taken together, these demographic biases likely also bias the types of objects and places that appear in images on Reddit, and the language used to describe these images. We do not offer explicit countermeasures to these biases, but users of RedCaps should keep in mind that size doesn’t guarantee diversity [51]. Subtler issues may also exist, such as imbalanced representation of demographic groups [59] or gender bias in object co-occurrence [60] or language [61]. These are hard to control in internet data, so we release RedCaps with explicit instructions on suitable use-cases; specifically requesting models not be trained to identify people, or make decisions that impact people. We document these instructions and other terms-of-use in a datasheet [45], provided in Appendix G. > **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** The scale of RedCaps means that we are unable to verify the contents of all images and captions. However we have tried to minimize the possibility that RedCaps contains data that might be offensive, insulting, threatening, or might cause anxiety via the following mitigations: (a) We manually curate the set of subreddits from which to collect data; we only chose subreddits that are not marked NSFW and which generally contain non-offensive content. (b) Within our curated subreddits, we did not include any posts marked NSFW. (c) We removed all instances whose captions contained any of the 400 potentially offensive words or phrases. Refer Section 2.2 in the main paper. (d) We remove all instances whose images were flagged NSFW by an off-the-shelf detector. We manually checked 50K random images in RedCaps and found one image containing nudity (exposed buttocks; no identifiable face). Refer Section 2.2 in the main paper > **Does the dataset identify any subpopulations (e.g., by age, gender)?** RedCaps does not explicitly identify any subpopulations. 
Since some images contain people and captions are free-form natural language written by Reddit users, it is possible that some captions may identify people appearing in individual images as part of a subpopulation. > **Were any ethical review processes conducted (e.g., by an institutional review board)?** We did not conduct a formal ethical review process via institutional review boards. However, as described in Section 2.2 of the main paper and Q16 we employed several filtering mechanisms to try and remove instances that could be problematic. ### Other Known Limitations From the paper: > **Are there any errors, sources of noise, or redundancies in the dataset?** RedCaps is noisy by design since image-text pairs on the internet are noisy and unstructured. Some instances may also have duplicate images and captions – Reddit users may have shared the same image post in multiple subreddits. Such redundancies constitute a very small fraction of the dataset, and should have almost no effect in training large-scale models. > **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals non-public communications)?** No, the subreddits included in RedCaps do not cover topics that may be considered confidential. All posts were publicly shared on Reddit prior to inclusion in RedCaps. ## Additional Information ### Dataset Curators From the paper: > Four researchers at the University of Michigan (affiliated as of 2021) have created RedCaps: Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson. ### Licensing Information The image metadata is licensed under CC-BY 4.0 license. Additionally, uses of this dataset are subject to Reddit API terms (https://www.reddit.com/wiki/ api-terms) and users must comply with Reddit User Agreeement, Content Policy, and Privacy Policy – all accessible at https://www.redditinc.com/policies. From the paper: > RedCaps should only be used for non-commercial research. RedCaps should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of RedCaps are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies. ### Citation Information ```bibtex @misc{desai2021redcaps, title={RedCaps: web-curated image-text data created by the people, for the people}, author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson}, year={2021}, eprint={2111.11431}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
jat-project/jat-dataset
jat-project
"2024-02-16T13:52:52"
698,425
40
["task_categories:reinforcement-learning","task_categories:text-generation","task_categories:questio(...TRUNCATED)
[ "reinforcement-learning", "text-generation", "question-answering" ]
"2023-08-29T09:03:24"
"---\nannotations_creators:\n- found\n- machine-generated\nlicense: apache-2.0\nsource_datasets:\n- (...TRUNCATED)
Salesforce/wikitext
Salesforce
"2024-01-04T16:49:18"
617,298
431
["task_categories:text-generation","task_categories:fill-mask","task_ids:language-modeling","task_id(...TRUNCATED)
[ "text-generation", "fill-mask" ]
"2022-03-02T23:29:22"
"---\nannotations_creators:\n- no-annotation\nlanguage_creators:\n- crowdsourced\nlanguage:\n- en\nl(...TRUNCATED)

# Dataset Card for Hugging Face Hub Dataset Cards

This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about the datasets they document. This dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of dataset cards from the Hub. We hope that this dataset will help support research in the area of dataset cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.

## Dataset Details

## Uses

There are a number of potential uses for this dataset (a short worked example follows the list), including:

- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
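
As a minimal sketch of the first two uses, the snippet below loads the cards with the `datasets` library and counts the most frequent words across card bodies. The split name `train` is an assumption; the `card` column holds the raw README text, per the schema above.

```python
from collections import Counter

from datasets import load_dataset

# Load the card collection; the split name "train" is an assumption.
cards = load_dataset("librarian-bots/dataset_cards_with_metadata", split="train")

# Toy text-mining pass: most frequent words across all card bodies.
counts = Counter()
for row in cards:
    counts.update(row["card"].lower().split())

print(counts.most_common(20))
```
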

### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

This dataset has a single split.

## Dataset Creation

### Curation Rationale

The dataset was created to assist people in working with dataset cards. In particular, it was created to support research in the area of dataset cards and their use. It is also possible to use the Hugging Face Hub API or client library to download dataset cards, and that option may be preferable if you have a very specific use case or require a different format.
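
For instance, if you only need a handful of cards, the `huggingface_hub` client can fetch them individually. A minimal sketch (the repo id here is just an example):

```python
from huggingface_hub import DatasetCard

# Fetch one dataset card straight from the Hub.
card = DatasetCard.load("Salesforce/wikitext")

print(card.data)        # parsed YAML metadata (license, task categories, ...)
print(card.text[:500])  # markdown body without the metadata header
```
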

### Source Data

The source data is README.md files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.

#### Data Collection and Processing

The data is downloaded daily using a cron job.

#### Who are the source data producers?

The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of contributors, ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository, although this information can be gathered from the Hugging Face Hub API.

### Annotations

There are no additional annotations in this dataset beyond the dataset card content.

#### Annotation process

N/A

#### Who are the annotators?

N/A

### Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.

## Bias, Risks, and Limitations

Dataset cards are created by the community, and we do not have any control over their content. We do not review the content of the cards, and we make no claims about the accuracy of the information they contain. Some dataset cards will themselves discuss bias, sometimes by providing examples of bias in the data they describe. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.

## Dataset Card Authors

@davanstrien

## Dataset Card Contact

@davanstrien
