MixBench: A Benchmark for Mixed Modality Retrieval
MixBench is a benchmark for evaluating retrieval across text, images, and multimodal documents. It is designed to test how well retrieval models handle queries and documents that span different modalities, such as pure text, pure images, and combined image+text inputs.
MixBench includes four subsets, each curated from a different data source:
- MSCOCO
- Google_WIT
- VisualNews
- OVEN
Each subset contains:
- `queries.jsonl`: each entry contains a `query_id`, a `text`, and/or an `image`
- `mixed_corpus.jsonl`: each entry contains a `corpus_id` and either a `text`, an `image`, or a multimodal document (`text` and `image`)
- `qrels.tsv`: a tab-separated list of relevant query-document pairs (`query_id`, `corpus_id`, `score=1`); see the parsing sketch after this list
- `corpus.jsonl`: the original corpus
This benchmark supports diverse retrieval settings including unimodal-to-multimodal and cross-modal search.
Load Example

You can load a specific subset of MixBench using the `name` argument:
```python
from datasets import load_dataset

# Load the MSCOCO subset: queries, mixed corpus, and relevance judgments
ds_query = load_dataset("mixed-modality-search/MixBench25", name="MSCOCO", split="query")
ds_corpus = load_dataset("mixed-modality-search/MixBench25", name="MSCOCO", split="mixed_corpus")
ds_qrels = load_dataset("mixed-modality-search/MixBench25", name="MSCOCO", split="qrel")

# Load the mixed corpus of the other subsets
ds_gwit = load_dataset("mixed-modality-search/MixBench25", name="Google_WIT", split="mixed_corpus")
ds_news = load_dataset("mixed-modality-search/MixBench25", name="VisualNews", split="mixed_corpus")
ds_oven = load_dataset("mixed-modality-search/MixBench25", name="OVEN", split="mixed_corpus")
```
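Once a subset is loaded, the splits can be used for cross-modal retrieval experiments. The sketch below is illustrative only, not the official evaluation code: it assumes a CLIP-style encoder from `sentence-transformers`, that the fields are named as described above (`query_id`, `corpus_id`, `text`, `image`), and that the `image` column yields PIL images. Averaging text and image embeddings for multimodal documents is likewise an assumed, simple fusion choice.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# A CLIP-style model that embeds both text and images into a shared space.
model = SentenceTransformer("clip-ViT-B-32")

def embed_entry(entry):
    """Embed a query or corpus entry; average modalities when both are present."""
    parts = []
    if entry.get("text"):
        parts.append(model.encode(entry["text"]))
    if entry.get("image") is not None:
        parts.append(model.encode(entry["image"]))  # PIL image
    return np.mean(parts, axis=0)

# Embed queries and the mixed corpus, then rank by cosine similarity.
query_vecs = np.stack([embed_entry(q) for q in ds_query])
corpus_vecs = np.stack([embed_entry(d) for d in ds_corpus])
query_vecs /= np.linalg.norm(query_vecs, axis=1, keepdims=True)
corpus_vecs /= np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
scores = query_vecs @ corpus_vecs.T

# Hit@10: fraction of queries with at least one relevant document in the top 10.
relevant = {}
for row in ds_qrels:
    relevant.setdefault(row["query_id"], set()).add(row["corpus_id"])

hits = 0
for qi, q in enumerate(ds_query):
    top10 = np.argsort(-scores[qi])[:10]
    retrieved = {ds_corpus[int(ci)]["corpus_id"] for ci in top10}
    if retrieved & relevant.get(q["query_id"], set()):
        hits += 1
print(f"Hit@10: {hits / len(ds_query):.3f}")
```

Because the corpus mixes text-only, image-only, and image+text documents, the same loop covers unimodal-to-multimodal and cross-modal settings; only the encoder and fusion strategy need to change for other models.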