Data preview (qrels)

Columns:
  query-id   string (1-7 characters)
  corpus-id  string (1-7 characters)
  score      float64 (all values 1)

Sample rows (query-id, corpus-id, score):
  1185869, 0, 1
  1185868, 16, 1
  597651, 49, 1
  403613, 60, 1
  1183785, 389, 1
  312651, 616, 1
  ...
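To poke at these qrels directly, you can load them with the Hugging Face datasets library. This is a minimal sketch, not a confirmed recipe: the repository id ("mteb/msmarco") and the qrels config name ("default", alongside "queries" and "corpus") follow the usual mteb retrieval layout and are assumptions here.

from datasets import load_dataset

# Assumed repo id and config name; the usual mteb retrieval layout also
# provides "queries" and "corpus" configs next to the "default" qrels.
qrels = load_dataset("mteb/msmarco", "default", split="dev")
print(qrels[0])  # each row: {"query-id": ..., "corpus-id": ..., "score": 1.0}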

MSMARCO

An MTEB dataset
Massive Text Embedding Benchmark

MS MARCO is a collection of datasets focused on deep learning in search.

Task category: t2t
Domains: Encyclopaedic, Academic, Blog, News, Medical, Government, Reviews, Non-fiction, Social, Web
Reference: https://microsoft.github.io/msmarco/
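The metadata above can also be read programmatically from the task object. A minimal sketch, assuming the standard mteb TaskMetadata fields (category, domains, reference):

import mteb

task = mteb.get_task("MSMARCO")
print(task.metadata.category)   # "t2t"
print(task.metadata.domains)    # ["Encyclopaedic", "Academic", ...]
print(task.metadata.reference)  # "https://microsoft.github.io/msmarco/"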

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

tasks = mteb.get_tasks(tasks=["MSMARCO"])
evaluator = mteb.MTEB(tasks=tasks)

model = mteb.get_model(YOUR_MODEL)  # YOUR_MODEL: placeholder for a model name on the Hub
evaluator.run(model)

To learn more about how to run models on mteb tasks, check out the GitHub repository.
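For a complete run that persists scores, the sketch below passes output_folder to run(), which writes JSON result files to disk. The model name is only an example, not one tied to this dataset:

import mteb

# "sentence-transformers/all-MiniLM-L6-v2" is an example model name;
# substitute any embedding model supported by mteb.get_model.
tasks = mteb.get_tasks(tasks=["MSMARCO"])
evaluator = mteb.MTEB(tasks=tasks)
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")
results = evaluator.run(model, output_folder="results")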

Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@article{DBLP:journals/corr/NguyenRSGTMD16,
  archiveprefix = {arXiv},
  author = {Tri Nguyen and
            Mir Rosenberg and
            Xia Song and
            Jianfeng Gao and
            Saurabh Tiwary and
            Rangan Majumder and
            Li Deng},
  bibsource = {dblp computer science bibliography, https://dblp.org},
  biburl = {https://dblp.org/rec/journals/corr/NguyenRSGTMD16.bib},
  eprint = {1611.09268},
  journal = {CoRR},
  timestamp = {Mon, 13 Aug 2018 16:49:03 +0200},
  title = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},
  url = {http://arxiv.org/abs/1611.09268},
  volume = {abs/1611.09268},
  year = {2016},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following are the descriptive statistics for this task. They can also be obtained using:

import mteb

task = mteb.get_task("MSMARCO")

desc_stats = task.metadata.descriptive_stats
{
    "train": {
        "num_samples": 9344762,
        "number_of_characters": 2994608051,
        "num_documents": 8841823,
        "min_document_length": 4,
        "average_document_length": 336.79716603691344,
        "max_document_length": 1670,
        "unique_documents": 8841823,
        "num_queries": 502939,
        "min_query_length": 5,
        "average_query_length": 33.21898281898998,
        "max_query_length": 215,
        "unique_queries": 502939,
        "none_queries": 0,
        "num_relevant_docs": 532751,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.0592755781516248,
        "max_relevant_docs_per_query": 7,
        "unique_relevant_docs": 516472,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    },
    "dev": {
        "num_samples": 8848803,
        "number_of_characters": 2978133099,
        "num_documents": 8841823,
        "min_document_length": 4,
        "average_document_length": 336.79716603691344,
        "max_document_length": 1670,
        "unique_documents": 8841823,
        "num_queries": 6980,
        "min_query_length": 9,
        "average_query_length": 33.2621776504298,
        "max_query_length": 186,
        "unique_queries": 6980,
        "none_queries": 0,
        "num_relevant_docs": 7437,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.0654727793696275,
        "max_relevant_docs_per_query": 4,
        "unique_relevant_docs": 7433,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    },
    "test": {
        "num_samples": 8841866,
        "number_of_characters": 2977902337,
        "num_documents": 8841823,
        "min_document_length": 4,
        "average_document_length": 336.79716603691344,
        "max_document_length": 1670,
        "unique_documents": 8841823,
        "num_queries": 43,
        "min_query_length": 16,
        "average_query_length": 32.74418604651163,
        "max_query_length": 55,
        "unique_queries": 43,
        "none_queries": 0,
        "num_relevant_docs": 9260,
        "min_relevant_docs_per_query": 132,
        "average_relevant_docs_per_query": 95.3953488372093,
        "max_relevant_docs_per_query": 582,
        "unique_relevant_docs": 9139,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
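Continuing from the snippet above, desc_stats is a plain dictionary keyed by split name, so individual figures can be pulled out directly:

# desc_stats comes from task.metadata.descriptive_stats above.
train = desc_stats["train"]
print(train["num_documents"])                    # 8841823
print(train["num_queries"])                      # 502939
print(train["average_relevant_docs_per_query"])  # ~1.06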

This dataset card was automatically generated using MTEB
