---
annotations_creators: []
language_creators: []
language:
- de
- en
- fr
- ru
- zh
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
- multilingual
pretty_name: MTEB Benchmark
configs:
- config_name: default
  data_files:
  - path: test/*.jsonl.gz
    split: test
- config_name: fr-en
  data_files:
  - path: test/fr-en.jsonl.gz
    split: test
- config_name: ru-en
  data_files:
  - path: test/ru-en.jsonl.gz
    split: test
- config_name: de-en
  data_files:
  - path: test/de-en.jsonl.gz
    split: test
- config_name: zh-en
  data_files:
  - path: test/zh-en.jsonl.gz
    split: test
---
# Dataset Card for MTEB Benchmark

## Dataset Description
- Homepage: https://github.com/embeddings-benchmark/mteb-draft
- Repository: https://github.com/embeddings-benchmark/mteb-draft
- Paper: soon
- Leaderboard: https://docs.google.com/spreadsheets/d/14P8bdEzsIgTGGlp9oOlMw-THrQbn2fYfZEkZV4NUBos
- Point of Contact: [email protected]
### Dataset Summary
MTEB is a heterogeneous benchmark built from a diverse set of tasks:
- BitextMining: BUCC, Tatoeba
- Classification: AmazonCounterfactualClassification, AmazonPolarityClassification, AmazonReviewsClassification, Banking77Classification, EmotionClassification, ImdbClassification, MassiveIntentClassification, MassiveScenarioClassification, MTOPDomainClassification, MTOPIntentClassification, ToxicConversationsClassification, TweetSentimentExtractionClassification
- Clustering: ArxivClusteringP2P, ArxivClusteringS2S, BiorxivClusteringP2P, BiorxivClusteringS2S, MedrxivClusteringP2P, MedrxivClusteringS2S, RedditClustering, RedditClusteringP2P, StackExchangeClustering, StackExchangeClusteringP2P, TwentyNewsgroupsClustering
- Pair Classification: SprintDuplicateQuestions, TwitterSemEval2015, TwitterURLCorpus
- Reranking: AskUbuntuDupQuestions, MindSmallReranking, SciDocs, StackOverflowDupQuestions
- Retrieval: ArguAna, ClimateFEVER, CQADupstackRetrieval, DBPedia, FEVER, FiQA2018, HotpotQA, MSMARCO, MSMARCOv2, NFCorpus, NQ, QuoraRetrieval, SCIDOCS, SciFact, Touche2020, TRECCOVID
- STS: BIOSSES, SICK-R, STS12, STS13, STS14, STS15, STS16, STS17, STS22, STSBenchmark
- Summarization: SummEval
All of these datasets have been preprocessed and can be used directly in your experiments.
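The per-language-pair configs above point at gzip-compressed JSON Lines shards (e.g. `test/de-en.jsonl.gz`). As a minimal sketch of how such a shard can be read without any extra dependencies — the record schema is not specified in this card, so the code only yields raw records rather than assuming particular fields:

```python
import gzip
import json


def read_jsonl_gz(path):
    """Yield one record per line from a gzip-compressed JSON Lines file,
    the shard format listed in the configs above."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                yield json.loads(line)
```

For example, `next(read_jsonl_gz("test/de-en.jsonl.gz"))` returns the first record of the de-en test shard as a Python dict (the path assumes a local clone of the repository).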