Dataset Viewer (auto-converted to Parquet)

Preview of the relevance judgments (qrels). Each row maps a query-id to a relevant corpus-id; both ids are strings of 1–6 characters, and every score in the preview is 1 (float64):

| query-id | corpus-id | score |
|----------|-----------|-------|
| 19399    | 102236    | 1     |
| 19399    | 91901     | 1     |
| 19399    | 177507    | 1     |
| …        | …         | …     |
| 97154    | 41889     | 1     |
| 97154    | 15609     | 1     |
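
These judgments can also be loaded programmatically with the datasets library. A minimal sketch, assuming a hypothetical repository id (the actual mteb/... id is truncated in the header above) and that the qrels live in the default configuration's test split:

from datasets import load_dataset

# Hypothetical repository id; substitute the actual "mteb/..." id for this card.
qrels = load_dataset("mteb/CQADupstackEnglishRetrieval", split="test")

df = qrels.to_pandas()  # columns: query-id, corpus-id, score
print(df.head())
print(df["query-id"].nunique(), "queries with at least one relevance judgment")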

CQADupstackEnglishRetrieval

An MTEB dataset
Massive Text Embedding Benchmark

CQADupStack: A Benchmark Data Set for Community Question-Answering Research

Task category: t2t
Domains: Written
Reference: http://nlp.cis.unimelb.edu.au/resources/cqadupstack/

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

# Select the task and set up the evaluation harness.
tasks = mteb.get_tasks(tasks=["CQADupstackEnglishRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

# Replace YOUR_MODEL with the name of the embedding model to evaluate.
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
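
In recent mteb versions, evaluator.run writes each task's scores as JSON files under a local results/ directory by default; an output_folder argument can be passed to redirect them.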

To learn more about how to run models on MTEB tasks, check out the GitHub repository.

Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@inproceedings{hoogeveen2015,
  acmid = {2838934},
  address = {New York, NY, USA},
  articleno = {3},
  author = {Hoogeveen, Doris and Verspoor, Karin M. and Baldwin, Timothy},
  booktitle = {Proceedings of the 20th Australasian Document Computing Symposium (ADCS)},
  doi = {10.1145/2838931.2838934},
  isbn = {978-1-4503-4040-3},
  location = {Parramatta, NSW, Australia},
  numpages = {8},
  pages = {3:1--3:8},
  publisher = {ACM},
  series = {ADCS '15},
  title = {CQADupStack: A Benchmark Data Set for Community Question-Answering Research},
  url = {http://doi.acm.org/10.1145/2838931.2838934},
  year = {2015},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following are the descriptive statistics for this task. They can also be obtained programmatically:

import mteb

task = mteb.get_task("CQADupstackEnglishRetrieval")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 41791,
        "number_of_characters": 19521569,
        "num_documents": 40221,
        "min_document_length": 41,
        "average_document_length": 483.4710971880361,
        "max_document_length": 6511,
        "unique_documents": 40221,
        "num_queries": 1570,
        "min_query_length": 15,
        "average_query_length": 48.32993630573248,
        "max_query_length": 149,
        "unique_queries": 1570,
        "none_queries": 0,
        "num_relevant_docs": 3765,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 2.3980891719745223,
        "max_relevant_docs_per_query": 79,
        "unique_relevant_docs": 3765,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
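
The reported figures are internally consistent, and a few of them can be cross-checked directly (a sketch; every number below is taken from the JSON above):

# Cross-check the reported statistics against one another.
num_documents = 40221
num_queries = 1570
num_relevant_docs = 3765

# num_samples is the document count plus the query count.
assert num_documents + num_queries == 41791

# The reported average follows directly from the totals.
print(num_relevant_docs / num_queries)  # 2.3980891719745223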

This dataset card was automatically generated using MTEB
