Modalities: Text
Formats: JSON
Languages: English
Libraries: Datasets, pandas
Dataset preview (qrels):

Columns: query-id (string, 5 to 8 chars), corpus-id (string, 4 to 9 chars), score (float64; all values in the preview are 1).

query-id  corpus-id  score
test0     doc0       1
test0     doc1       1
test1     doc6       1
test2     doc10      1
test3     doc17      1
test3     doc18      1
test4     doc42      1
test5     doc50      1
test6     doc59      1
test6     doc63      1
test7     doc67      1
test8     doc86      1
test9     doc91      1
test10    doc118     1
test11    doc136     1
test12    doc153     1
test13    doc172     1
test14    doc293     1
test15    doc302     1
test16    doc305     1
test17    doc449     1
test17    doc450     1
test18    doc514     1
test19    doc565     1
test19    doc579     1
test20    doc618     1
test21    doc635     1
test22    doc649     1
test23    doc653     1
test24    doc658     1
test25    doc698     1
test25    doc703     1
test26    doc724     1
test27    doc763     1
test28    doc787     1
test28    doc789     1
test29    doc807     1
test30    doc820     1
test30    doc824     1
test31    doc897     1
test32    doc908     1
test32    doc916     1
test33    doc921     1
test34    doc967     1
test35    doc972     1
test36    doc1010    1
test37    doc1016    1
test38    doc1026    1
test39    doc1042    1
test40    doc1070    1
test40    doc1071    1
test41    doc1100    1
test42    doc1118    1
test43    doc1154    1
test44    doc1164    1
test45    doc1187    1
test46    doc1193    1
test47    doc1215    1
test48    doc1229    1
test48    doc1239    1
test49    doc1260    1
test50    doc1404    1
test50    doc1405    1
test50    doc1407    1
test51    doc1420    1
test52    doc1432    1
test53    doc1448    1
test54    doc1468    1
test54    doc1474    1
test55    doc1486    1
test56    doc1490    1
test56    doc1541    1
test57    doc1580    1
test58    doc1599    1
test59    doc1617    1
test60    doc1631    1
test61    doc1679    1
test61    doc1684    1
test62    doc1729    1
test63    doc1744    1
test64    doc1754    1
test65    doc1771    1
test65    doc1774    1
test65    doc1782    1
test66    doc1824    1
test67    doc1927    1
test68    doc2000    1
test69    doc2030    1
test70    doc2107    1
test71    doc2127    1
test72    doc2134    1
test73    doc2151    1
test74    doc2254    1
test75    doc2262    1
test76    doc2274    1
test77    doc2319    1
test78    doc2339    1
test79    doc2368    1
test80    doc2404    1
test81    doc2479    1

(End of preview; the full test qrels contain 4,201 query-document pairs.)
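The qrels shown above can also be loaded directly with the datasets library. A minimal sketch, assuming the repo id mteb/nq and that the relevance judgments are the default config's test split (the repo id, config, and split names are assumptions, not confirmed by this card):

import datasets

# Repo id, config, and split are assumptions; adjust to the actual layout.
qrels = datasets.load_dataset("mteb/nq", "default", split="test")
print(qrels.column_names)  # ['query-id', 'corpus-id', 'score']
print(qrels[0])            # {'query-id': 'test0', 'corpus-id': 'doc0', 'score': 1.0}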

NQ

An MTEB (Massive Text Embedding Benchmark) dataset

Natural Questions: a benchmark for question answering research, built from real Google search queries with answers drawn from Wikipedia.

Task category: t2t
Domains: Written, Encyclopaedic
Reference: https://ai.google.com/research/NaturalQuestions/
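This metadata can also be read programmatically. A minimal sketch, assuming a recent mteb version in which TaskMetadata exposes these fields; the values in the comments come from the card above:

import mteb

task = mteb.get_task("NQ")
print(task.metadata.category)   # t2t
print(task.metadata.domains)    # ['Written', 'Encyclopaedic']
print(task.metadata.reference)  # https://ai.google.com/research/NaturalQuestions/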

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

# Select the NQ task and set up the evaluator.
# get_tasks returns a list, so name the variable accordingly.
tasks = mteb.get_tasks(tasks=["NQ"])
evaluator = mteb.MTEB(tasks=tasks)

# Replace YOUR_MODEL with the model you want to evaluate.
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)

To learn more about how to run models on mteb tasks, check out the GitHub repository.
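A slightly fuller sketch, assuming a recent mteb version; the model name and output folder below are examples, not prescribed by this card:

import mteb

# Run NQ end to end and write result files to disk.
tasks = mteb.get_tasks(tasks=["NQ"])
evaluator = mteb.MTEB(tasks=tasks)

# Example model; any embedding model supported by mteb.get_model works here.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")
results = evaluator.run(model, output_folder="results")

# Each result holds per-split scores (structure as in recent mteb versions).
for res in results:
    print(res.task_name, res.scores["test"][0]["main_score"])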

Citation

If you use this dataset, please cite the dataset as well as mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@article{47761,
  author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  journal = {Transactions of the Association for Computational Linguistics},
  title = {Natural Questions: a Benchmark for Question Answering Research},
  year = {2019},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following are the descriptive statistics for this task. They can also be obtained using:

import mteb

task = mteb.get_task("NQ")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 2684920,
        "number_of_characters": 1322743518,
        "num_documents": 2681468,
        "min_document_length": 5,
        "average_document_length": 493.2287851281462,
        "max_document_length": 17008,
        "unique_documents": 2681468,
        "num_queries": 3452,
        "min_query_length": 25,
        "average_query_length": 48.17902665121669,
        "max_query_length": 100,
        "unique_queries": 3452,
        "none_queries": 0,
        "num_relevant_docs": 4201,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.2169756662804172,
        "max_relevant_docs_per_query": 4,
        "unique_relevant_docs": 4201,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
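As a quick consistency check, average_relevant_docs_per_query above is simply num_relevant_docs / num_queries = 4201 / 3452 ≈ 1.217.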

This dataset card was automatically generated using MTEB
