Dataset: mteb/quora
Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, pandas
Dataset preview (qrels): each row links a query-id (string, 3-6 characters) to a duplicate corpus-id (string, 3-6 characters) with a relevance score (float64, always 1). The first few rows:

query-id  corpus-id  score
318       317        1
378       377        1
379       29976      1
379       380        1
379       45646      1
379       45647      1
...

End of preview.
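The preview above shows the relevance judgments (qrels). Below is a minimal sketch for loading the raw files with the datasets library, assuming the usual MTEB retrieval layout of "corpus", "queries", and "default" (qrels) configs; the config names are an assumption, not confirmed by this card:

import datasets

# Config names assumed from the common MTEB retrieval layout.
corpus = datasets.load_dataset("mteb/quora", "corpus")
queries = datasets.load_dataset("mteb/quora", "queries")
qrels = datasets.load_dataset("mteb/quora", "default")  # query-id, corpus-id, score

print(qrels)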

QuoraRetrieval

An MTEB dataset (Massive Text Embedding Benchmark)

QuoraRetrieval is built from questions marked as duplicates on the Quora platform. Given a question, the task is to retrieve its duplicate questions from the corpus.

Task category: t2t (text-to-text)
Domains: Written, Web, Blog
Reference: https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs
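
To make the task concrete, here is a minimal duplicate-question retrieval sketch outside of mteb, using sentence-transformers; the model id and example questions are illustrative only, not part of the dataset:

from sentence_transformers import SentenceTransformer, util

# Illustrative embedding model; any text embedding model can be used.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

corpus = [
    "How do I learn Python quickly?",
    "What is the best way to learn Python fast?",
    "Why is the sky blue?",
]
query = "How can I pick up Python in a short time?"

# Embed query and corpus, then rank corpus entries by cosine similarity.
corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
for hit in util.semantic_search(query_emb, corpus_emb, top_k=2)[0]:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))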

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

# Select the task and build the evaluator.
tasks = mteb.get_tasks(tasks=["QuoraRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

# Replace with the model you want to evaluate,
# e.g. "sentence-transformers/all-MiniLM-L6-v2".
model = mteb.get_model("your-model-name")
evaluator.run(model)

To learn more about how to run models on mteb tasks, check out the GitHub repository.
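
By default, run also writes per-task JSON result files to a local output folder and returns result objects. A hedged sketch of inspecting them; the output_folder argument and the task_name/scores attributes are assumptions about recent mteb versions:

# Attribute names assumed from recent mteb versions; check your installed version.
results = evaluator.run(model, output_folder="results")
for res in results:
    print(res.task_name, res.scores)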

Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing done as part of the MMTEB contribution.


@misc{quora-question-pairs,
  author = {DataCanary and hilfialkaff and Lili Jiang and Meg Risdal and Nikhil Dandekar and tomtung},
  publisher = {Kaggle},
  title = {Quora Question Pairs},
  url = {https://kaggle.com/competitions/quora-question-pairs},
  year = {2017},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following JSON contains the descriptive statistics for this task. They can also be obtained using:

import mteb

task = mteb.get_task("QuoraRetrieval")

desc_stats = task.metadata.descriptive_stats
{
    "dev": {
        "num_samples": 527931,
        "number_of_characters": 33285028,
        "num_documents": 522931,
        "min_document_length": 2,
        "average_document_length": 63.158154708747425,
        "max_document_length": 1170,
        "unique_documents": 522931,
        "num_queries": 5000,
        "min_query_length": 12,
        "average_query_length": 51.5342,
        "max_query_length": 268,
        "unique_queries": 5000,
        "none_queries": 0,
        "num_relevant_docs": 7626,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.5252,
        "max_relevant_docs_per_query": 84,
        "unique_relevant_docs": 7626,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    },
    "test": {
        "num_samples": 532931,
        "number_of_characters": 33542753,
        "num_documents": 522931,
        "min_document_length": 2,
        "average_document_length": 63.158154708747425,
        "max_document_length": 1170,
        "unique_documents": 522931,
        "num_queries": 10000,
        "min_query_length": 2,
        "average_query_length": 51.5396,
        "max_query_length": 258,
        "unique_queries": 10000,
        "none_queries": 0,
        "num_relevant_docs": 15675,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.5675,
        "max_relevant_docs_per_query": 75,
        "unique_relevant_docs": 15675,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
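
As a quick sanity check, the averages are consistent with the counts: 7626 relevant documents / 5000 dev queries = 1.5252, and 15675 / 10000 = 1.5675 on test. A sketch recomputing these from the qrels with pandas; the config and column names are assumed from the preview above:

from datasets import load_dataset

# Config name assumed from the standard MTEB retrieval layout.
qrels = load_dataset("mteb/quora", "default")
for split in ("dev", "test"):
    df = qrels[split].to_pandas()
    per_query = df.groupby("query-id").size()
    # number of queries, average and max relevant docs per query
    print(split, len(per_query), per_query.mean(), per_query.max())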

This dataset card was automatically generated using MTEB
